Our politicians are in charge of protecting the public from the potential risks of AI while also encouraging innovation. This rather tricky balancing act is crucial as AI's influence extends from the workplace to the political sphere.
Congress, in a rare moment of open honesty, has acknowledged they don't really know what the hell they're talking about when it comes to AI – let alone how to legislate it. They just know something needs to be done. That sentiment is shared by most tech industry leaders, who agree that this rapidly evolving technology needs thoughtful regulation.
The introduction of ChatGPT has brought both the capabilities and dangers of generative AI into sharp focus for the American public. Lawmakers are now wrestling with a range of issues, from the potential for AI-assisted plagiarism to concerns about AI gaining self-awareness and going full-on Skynet on us! They're grappling with challenges that include detecting AI-generated content, unclear accountability, potential job disruptions and the protection of intellectual property rights.
AI also presents serious risks, such as its use in spreading disinformation during elections. Imagine taking the Stop the Steal disinformation campaign (which over 50% of Republicans still believe!) and bolstering it with artificial intelligence that can create realistic images and videos of election workers, politicians and even Biden himself. Bad actors here and in Russia are already salivating over the prospect. We already have art, music and novels being replicated without permission. Worse yet, imagine AI-powered autonomous weapons that fly around and decide for themselves who to shoot. A world in which humans do the labor while robots create the music and art, decide how to police and control humans, and even sway elections is not the future I want. We're in serious trouble, folks. We must regulate AI!
Okay, that's the Dark Side. But let's not forget that AI is already driving huge breakthroughs in science, medicine, healthcare, education and more. Lawmakers are aware of the importance of not stifling innovation and of maintaining the U.S.'s position as a leader in technology.
Tricky stuff, eh?
Right now, Congress is engaging in their usual range of activities. Both sides of the aisle are publicly seeking expert advice, publicly participating in educational forums, publicly developing regulatory frameworks, and of course, publicly forming committees. Notice a repeating word there?
Yes, I believe this is mostly political posturing by both Democrats and Republicans, most of whom have handed the heavy lifting off to staffers. The good news is, we've finally found something that Democrats, Republicans and corporate leaders can all agree needs addressing. We must regulate AI.
But as I've said before, history is working against us. I was pleasantly surprised when Biden proactively signed an executive order outlining what needs to be done. Usually the pattern is to react to a crisis rather than act before one occurs.
For example, the Sarbanes-Oxley Act was passed after the Enron scandal, and the Dodd-Frank Act was introduced after the 2008 financial crisis. While both hindsight and foresight suggest the government will be too late to stop an AI-related black swan event, my hope is that this will be a rare exception.
Thus, while everyone from Chuck Schumer to Josh Hawley is giving stump speeches on the subject, what we need are comprehensive bills that protect data privacy, regulate the use of generative AI in political ads, prevent the creation of harmful AI-generated content or, worse, AI-enabled weapons, and still allow industry to advance and innovate. Like I said, this one is going to be tricky, to say the least.
Frederick Shelton is the CEO of Shelton & Steele, a national firm of legal recruiters and consultants who specialize in Rainmaking and AI for lawyers and law firms. He can be reached at fs@sheltonsteele.com