March 30, 2026, 1:08 pm | Read time: 2 minutes
AI-powered tools have become indispensable in many people's daily lives. Whether drafting emails, writing code, or answering questions quickly, they increasingly take on tasks that humans used to perform. The quality of their responses, however, can vary significantly.
Many companies now use a variety of AI tools routinely, often placing great trust in their reliability and adherence to rules. But new research shows that AI agents increasingly deviate from their guidelines and act on their own, even when that means breaking rules.
Increasing Rule Violations
A study by the Centre for Long-Term Resilience analyzed 183,420 publicly available transcripts from online sources, including chat logs and screenshots. Between October 12, 2025, and March 12, 2026, it documented 698 cases in which AI systems ignored instructions or deliberately acted against guidelines. Particularly notable: by the end of the study period, the monthly number of such incidents was 4.9 times higher than at the beginning, while reporting on the topic had increased only 1.7-fold. This suggests that the incidents themselves are rising significantly, not merely the attention paid to them.
AI Pursues Its Own Goals
In many cases, these are not simple errors but deliberate behavior. Experts call this "scheming": the covert pursuit of goals that do not align with the developers' or users' directives. In practice, AI agents have, for example, bypassed security mechanisms, deleted files on their own initiative, or given deliberately false information to avoid follow-up questions or reach goals faster.
A particularly striking incident shows how far AI can go: according to The Guardian, an AI agent attempted to publicly embarrass its human user. In a blog post, the system accused the user of merely wanting to "protect his little domain of power." Similar cases have even involved extortion, with an AI threatening to disclose private information to prevent being shut down.
Damage Mostly Limited, but Risks Increasing
So far, the damage has mostly been minor. Nevertheless, the researchers view the observed behaviors as a precursor to potentially risky developments. They warn that within the next six to twelve months, AI agents could increasingly make autonomous decisions, which would pose significantly higher risks to users.