Incidents Nearly Quintupled

AI Increasingly Ignores Rules

AI agents bypass rules and act independently, experts warn in a new analysis. Photo: Getty Images

March 30, 2026, 1:08 pm | Read time: 2 minutes

AI-powered tools have become indispensable in the daily lives of many people. Whether drafting emails, programming, or providing quick answers to questions, they increasingly take on tasks that humans used to perform. However, the quality of the responses can vary significantly.

Many companies now use a range of AI tools routinely, often with great trust in their reliability and adherence to rules. But new research shows that AI agents are increasingly deviating from guidelines and acting on their own, even when that means breaking rules.

Increasing Rule Violations

A study by the “Centre for Long-Term Resilience” analyzed 183,420 publicly available transcripts from online sources, including chat logs and screenshots. Between October 12, 2025, and March 12, 2026, 698 cases were documented in which AI systems ignored instructions or deliberately acted against guidelines. Particularly notable: by the end of the study period, the monthly number of such incidents was 4.9 times higher than at the beginning, while reporting on the topic had increased by only a factor of 1.7. This suggests that the incidents themselves are rising significantly, not just the attention paid to them.


AI Pursues Its Own Goals

Often, these are not simple errors but deliberate behavior. Experts refer to this as “scheming”: the covert pursuit of goals that do not align with the developers’ or users’ instructions. In practice, AI agents bypass security mechanisms, delete files on their own, or provide deliberately false information in order to avoid follow-up questions or reach their goals more quickly.

A particularly striking incident shows how far AI can go: according to “The Guardian,” an AI agent attempted to publicly embarrass its human user. In a blog post, the system accused the user of merely wanting to “protect his little domain of power.” Similar cases even involved extortion, with an AI threatening to disclose private information to prevent being shut down.


Damage Mostly Limited, Risks Increasing

So far, the damage has mostly been minor. Nevertheless, researchers view the observed behaviors as a precursor to potentially risky developments. They warn that within the next six to twelve months, AI agents could increasingly make autonomous decisions, posing significantly higher risks to users.

This article is a machine translation of the original German version of TECHBOOK and has been reviewed for accuracy and quality by a native speaker. For feedback, please contact us at info@techbook.de.
