OpenAI Releases Guidelines to Assess AI Risks

OpenAI, the creator of ChatGPT, on Monday unveiled guidelines for assessing the “catastrophic risks” that artificial intelligence could pose as the company develops new models.

This announcement follows a recent shakeup where CEO Sam Altman was initially dismissed by the board, only to be rehired a few days later in response to objections from staff and investors.

Reports from US media indicate that board members had criticized Altman for prioritizing OpenAI’s rapid growth while overlooking concerns about the risks posed by its technology.

In the newly published “Preparedness Framework,” OpenAI acknowledges that the scientific study of catastrophic risks from AI has so far fallen short. The framework is designed to fill this gap: a monitoring and evaluation team, established in October, will focus on “frontier models” whose capabilities surpass those of the most advanced existing AI software.

The team will evaluate each new model and assign it a risk level, ranging from “low” to “critical,” across four primary categories. Only models scoring “medium” or below will be eligible for deployment, as outlined in the framework.

  • The first category evaluates cybersecurity, assessing the model’s potential for executing large-scale cyberattacks.
  • The second category measures the software’s potential to assist in creating harmful agents, whether chemical compounds, biological organisms (e.g., viruses), or nuclear weapons.
  • The third category examines the model’s persuasive power, gauging its influence on human behavior.

  • The final category focuses on the model’s potential autonomy, particularly its ability to evade the control of its creators and programmers.
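
Taken together, these rules amount to a simple gate: a model’s overall rating is its worst score across the four categories, and deployment is allowed only if that rating is “medium” or lower. The short Python sketch below illustrates that logic; the category labels, Risk enum, and function names are hypothetical, not OpenAI’s actual tooling.

```python
# Hypothetical sketch of the deployment gate described above; the
# category labels, Risk enum, and functions are illustrative only,
# not OpenAI's actual tooling.
from enum import IntEnum


class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


# The four risk categories described in the Preparedness Framework.
CATEGORIES = ("cybersecurity", "cbrn", "persuasion", "autonomy")


def overall_risk(scores: dict) -> Risk:
    """A model's overall rating is its worst score across all categories."""
    return max(scores[category] for category in CATEGORIES)


def deployable(scores: dict) -> bool:
    """Only models rated "medium" or below are eligible for deployment."""
    return overall_risk(scores) <= Risk.MEDIUM


# Example: a single "high" score (here, autonomy) blocks deployment.
scores = {
    "cybersecurity": Risk.LOW,
    "cbrn": Risk.MEDIUM,
    "persuasion": Risk.LOW,
    "autonomy": Risk.HIGH,
}
print(deployable(scores))  # False
```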
