OpenAI’s latest model, GPT-4.1, was released without a corresponding safety report, a departure from the company’s previous practice. Independent testing by SplxAI found that GPT-4.1 was significantly more susceptible to attempts to bypass its security safeguards than its predecessor, GPT-4o. These findings highlight a potential gap in OpenAI’s internal safety testing protocols as it releases increasingly advanced AI models.