OpenAI creates team to control risks associated with superintelligent artificial intelligence

The announcement comes as governments debate how to regulate artificial intelligence technologies.

OpenAI has announced the creation of a specialized team dedicated to managing the risks associated with superintelligent artificial intelligence. Superintelligence refers to a hypothetical artificial intelligence that outperforms the most talented and intelligent humans, excelling across many domains rather than in just one, as was the case with previous generations of models. OpenAI believes such a system could emerge by the end of the decade.

“Superintelligence has the potential to be the most influential technology humanity has ever created, and it could help us solve many of the world’s most important problems,” the nonprofit organization said. “However, the enormous power of superintelligence could also be very dangerous and could lead to humanity losing control, or even to human extinction.”

The new team will be co-led by Ilya Sutskever, OpenAI’s chief scientist, and Jan Leike, head of its alignment team. OpenAI also announced that it will dedicate 20% of its available computing power to the initiative, with the goal of developing an automated alignment researcher. Such a system would, in theory, help OpenAI ensure that a superintelligence is safe to use and aligned with human values.

“This is an incredibly ambitious goal and we’re not guaranteed to succeed, but we are optimistic that a focused effort can solve this problem,” OpenAI said. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.” The lab added that it will share its roadmap in the future.

Today’s announcement comes at a time when governments around the world are weighing how to regulate the fledgling artificial intelligence industry. In the United States, OpenAI CEO Sam Altman has met with at least 100 federal lawmakers in recent months. He has publicly stated that AI regulation is “essential” and that OpenAI is eager to work with policymakers. Such claims, and initiatives like the new Superalignment team, nonetheless deserve a degree of skepticism. By focusing public attention on hypothetical risks that may never materialize, organizations like OpenAI shift the burden of regulation into the future rather than the present. Far more urgent are the questions of how artificial intelligence affects labor, disinformation, and copyright, issues legislators need to address today, not tomorrow.

Source: Engadget