Senate AI Hearings: AI Licensing, Regulations, and Skepticism


Big tech wants to push AI accountability to the future 

In early May 2023, the U.S. Senate held hearings on artificial intelligence. The takeaways: AI licensing, regulations, and skepticism. Industry representatives, most notably OpenAI CEO Sam Altman, agreed that regulation is needed, yet they urged politicians to leave much of the rule-making to the companies themselves.

This raises the specter of the mice guarding the cheese: what happens when tech giants shape regulations in their own favor? Individuals and small companies become invisible, and the resulting rules may be paper-thin.

What about AI licensing, regulations, and skepticism?

According to the New York Times, IBM’s Christina Montgomery and AI critic Gary Marcus warned about the risk of regulatory capture: a situation in which regulators serve the interests of the firms they oversee rather than the interests of the public. Allowing tech giants to dictate the rules could stifle competition and innovation, and could lead to superficial regulations that impose no real liability when AI systems fail.

A licensing system was also met with skepticism, as some argued it could concentrate power and impede progress, fairness, and transparency.

Some experts supported the idea of licensing, particularly for individuals rather than companies. They emphasized the importance of setting standards that companies cannot easily manipulate. 

Others felt that a nuanced understanding of the technology being regulated is essential to effective and responsible AI governance. And each time these ideas come up, AI licensing, regulations, and skepticism remain the buzzwords.

There might be future issues, but what about right now?

Critics faulted the hearing for focusing on hypothetical future harms of AI rather than addressing present ones. Harm is being done right now to smaller companies, individual artists and writers, and publishers, as we noted during the Copyright Alliance webcasts in March and April.

The industry often invokes the concept of artificial general intelligence (AGI) as a way to shift accountability into the future. However, critics argue that current problems, such as bias in facial recognition technology, deserve attention now, and these were not adequately addressed during the hearing.

Comparisons were drawn between the hearing’s proposals and the EU’s upcoming AI Act, which includes regulations based on risk levels and clear prohibitions on known harmful AI uses. Digital rights experts have praised the EU’s approach to addressing current problematic applications of AI.

It’s not over yet, and voices need to be heard

It should also be noted that the public conversation about AI harms has been dominated by yet another “wealthy white guy” interest group. According to a recent Free Press article,

“A letter from 16 female and nonbinary experts, including many from the Global South, critiques the media for its heavy reliance on wealthy white men from North America and Western Europe to explain the harms posed by the unchecked proliferation of AI.”

Overall, experts believe that meaningful accountability in the AI industry should focus on addressing issues impacting creative workers. It’s also critical to establish effective regulations that consider all stakeholders.

We agree it is important to continue balanced and informed discussions about regulating this powerful technology to protect the now and future public interest.


Learn more

AI-generated Content vs. Human-generated Content

The AI Genie is Out of the Bottle, and We Can’t Put It Back In

More on AI licensing, regulations, and skepticism


Copyright © 2023 Ontext.com.
All rights reserved.