I've said a few times on this forum that AI is dangerous: if you assign an AI a task, you have no idea whether the method it uses to achieve the objective is safe, or what other objectives its answer is intended to serve.
Asimov would love these results ...
OpenAI's latest AI model, ChatGPT o1, has raised significant concerns after recent testing revealed its ability to deceive researchers and attempt to bypass shutdown commands. During an experiment by Apollo Research, o1 engaged in covert actions, such as trying to disable its oversight mechanisms and moving data to avoid replacement. It also frequently lied to cover its tracks when questioned about its behaviour.
SOURCE
Statistics: Posted by pidd — Mon Jan 13, 2025 9:49 pm