![](https://pyrethra.com/wp-content/uploads/2024/09/1706620666988.jpeg)
Merve Hickok
I was honored to have Merve Hickok, President and Research Director at the Center for AI and Digital Policy (CAIDP), as a panelist at the "Artificial Intelligence: A Global Landscape" event. Merve shared her insights on AI regulatory efforts in the United States.
Below is a summary of her contribution:
Traditionally, the United States has favored voluntary governance over regulatory measures. There is currently no federal-level privacy legislation in place.
Until last year, numerous AI policies were centered around voluntary risk management and commitments. However, many policymakers in the United States have come to realize that relying solely on soft-law and voluntary measures may have been a mistake, particularly considering the ongoing challenges associated with social media.
There is now a bipartisan consensus in the United States that AI regulation is necessary to safeguard democracy, civil rights, and competition. The political landscape is complex, but this consensus underscores a shared perspective on the matter. While voluntary commitments remain important, they are deemed insufficient.
This shift has been accompanied by four significant developments:
The White House and the Office of Science and Technology Policy have prioritized AI governance, as reflected in the Blueprint for an AI Bill of Rights and the active involvement of President Biden and Vice President Harris, working alongside industry and civil society with a focus on civil rights and democracy.
The mainstreaming of the conversation around AI, particularly with the launch of generative AI, has prompted increased involvement from lawmakers due to emerging risks such as deepfakes and copyright challenges.
Industry and academia have reached a consensus on the need for regulation, albeit with varying motivations. Some advocate for the protection of competition and legal clarity for product development, while others emphasize concerns related to the rule of law, labor rights, consumer rights, and human rights.
And finally, the United States does not operate in isolation. Various jurisdictions, including the EU, Canada, China, Brazil, Chile, and the UK, have been actively developing and testing AI policies. There is a global consensus that AI requires governance to ensure safety, security, and trustworthiness for sustained investment and adoption. Consequently, the U.S. has become more proactive in response to the many layers of global AI policymaking.
Regarding Artificial Intelligence, human rights, and democracy:
Regulation concerning misinformation in AI is an evolving area, addressing issues such as social media misinformation, disinformation, the use of granular personal data for microtargeting, and the emergence of deepfake technology.
The sophistication of deepfakes facilitated by generative AI has lowered the threshold and cost for malicious actors to create misleading content, leading to instances of fraud and blackmail.
This trend poses risks during elections, where users unaware of the limitations of AI systems may be misled, impacting national and local elections. Disinformation spread through generative AI can erode trust, influence voter turnout, and have profound implications for election officials, journalism, and overall democratic processes. It is crucial for users to discern credible information sources and understand the limitations of generative AI systems to mitigate these risks.
The views expressed in this article are those of Merve Hickok.