OpenAI slashes AI model safety testing time

OpenAI has reportedly reduced the time allocated to safety testing of its most powerful AI models, raising concerns that risks could be overlooked in the rush to release new technology. The acceleration is driven by competitive pressure in the AI industry, as OpenAI aims to stay ahead of rivals such as Google and Meta. Experts warn that the shortened testing window could lead to the deployment of models with unforeseen safety issues, especially as these models become more capable.
