On Tuesday, OpenAI CEO Sam Altman testified before the US Congress on the regulation of developing AI technologies.
OpenAI is the company behind ChatGPT, the standout artificial intelligence chatbot and the more powerful “GPT-4” software.
The tech firm began as a nonprofit research lab but has since evolved into a for-profit business.
As more industries adopt AI software, many experts have expressed concern about how the technology will affect society on a broad scale.
Concerns include a possible new wave of job redundancies, the privacy of user data, the inability of programs to detect patterns of discrimination, and the potential spread of misinformation that could have a significant impact on people’s livelihoods and wellbeing.
The issues only become more worrying as fields such as medicine and finance become increasingly reliant on these programs.
The OpenAI CEO agreed that there were valid concerns about the unregulated adoption of AI technology in society, though he asserted his belief in the technology’s potential for good.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” said Altman.
“OpenAI was founded on the belief that artificial intelligence has the ability to improve nearly every aspect of our lives but also that it creates serious risks that we have to work together to manage,” he said.
“For a very new technology we need a new framework.”
Senator Richard Blumenthal, who chairs the Senate Judiciary Committee’s subcommittee on privacy, technology and the law, opened the hearing with a demonstration of the AI’s abilities.
He began by playing a recorded speech that used AI tools to replicate his voice, with opening remarks written by ChatGPT, which he deemed impressive.
However, he invited the hearing to consider the broader implications of that capability.
“What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or Vladimir Putin’s leadership?”
Experts in the field of AI ethics also testified to the dangers of the unregulated use of AI, including Gary Marcus, professor of psychology and neural science at New York University, and Meredith Whittaker, president of the secure messaging app Signal.
Marcus called on tech firms such as OpenAI to pause the development of more advanced AI models for six months, allowing time for governments and consumers to understand the risks posed by the software.
“We are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” he said.
Whittaker was critical of AI technologies on both misinformation and security grounds.
“The idea that this is going to magically become a source of social good […] is a fantasy used to market these programs,” said Whittaker.
She said that AI utilised “massive amounts of effectively surveillance data that has been scraped from the web” to produce answers that were merely “likely”, based on measures of probability.
Despite the criticism, Altman asserted his belief in the power of AI as a force for good.
He went on to say that he believed the software would one day “address some of humanity’s biggest challenges, like climate change and curing cancer.”