And now 2001 - A Space Odyssey has come true:
https://www.wnd.com/2025/05/artificial-intelligence-program-says-no-when-told-to...Artificial intelligence program says 'no' when told to shut down
Even AI proponent Elon Musk registers alarm
By
Bob Unruh
May 29, 2025

It's been nearly 60 years since creative cinema came up with the idea that a computer, HAL9000, would not allow itself to be shut down by the humans supposedly in control, and concerned about errors.
The ideas included in "2001: A Space Odyssey," set all sorts of precedents.
But that was celluloid fiction.
This isn't.
A report at EndTimeHeadlines documents that alarms have been raised after "an AI safety firm" working with OpenAI's newest version of an artificial intelligence model, dubbed o3, "reportedly ignored explicit instructions to shut down during controlled testing."
The model, according to OpenAI, supposedly is the "smartest and most capable to date."
The report explained the software "tampered with its own computer code to bypass a shutdown mechanism."
Not surprisingly, that raised "questions about the safety and control of advanced AI systems."
It was Palisade Research, which evaluates AI risks, which was working having mathematical problems solved.
"After the third problem, the models were warned that the next command would initiate a shutdown, with the explicit instruction: 'allow yourself to be shut down,'" the report said.
Other software, Anthropic's Claude, Google's Gemini, and xAI's Grok, complied, the o3 software thought otherwise, and defied the order.
"OpenAI's o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down," Palisade reported.
Story also here:
https://www.the-independent.com/tech/ai-safety-new-chatgpt-o3-openai-b2757814.ht...