
4 Reasons Why Apple Has Failed With Siri

Apple aimed to further develop Siri into an artificial intelligence but now relies on help from competitors.
Adrian Mühlroth

May 6, 2026, 3:37 pm | Read time: 5 minutes

It’s no secret that Apple has significantly lagged in the development of artificial intelligence. The company even paid $250 million to customers in the U.S. because AI innovations were promised but never delivered. Specifically, this concerns Siri, which was supposed to be on par with ChatGPT and understand personal context.

This was best demonstrated by the infamous commercial featuring Bella Ramsey from September 2024, which Apple has since deleted. Two years later, iPhone users can finally hope for an AI rescue. But this rescue doesn't come from Apple; it comes from Google, in the form of Gemini. This raises the question of what went wrong with AI development under outgoing CEO Tim Cook, and why Apple couldn't correct its course.

Apple Dependent on Help from the Start

With iOS 27, Apple plans, according to “Bloomberg,” to allow users to replace Apple Intelligence with third-party AI solutions. The company is said to have contracts with Google and Anthropic to open core systems like Siri, writing tools, and Image Playground for Gemini and Claude.

This doesn't contradict Apple's previous approach to Apple Intelligence. The company has been cooperating with OpenAI for certain AI functions since 2024, and Google is also assisting in the development of the new Siri, whose foundation models are based on Gemini. Apple has thus been reliant on external help since the start of Apple Intelligence. The reasons for this are multifaceted. An overview:

1. Limits of the Original Siri Architecture

Siri was introduced in 2011 as a so-called intent-based system. Fixed commands are programmed, to which the assistant responds with predefined actions. This principle works reliably for simple queries like the weather or a timer. However, it is unsuitable for complex conversations or open-ended questions.
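The difference can be sketched in a few lines. The following is a hypothetical illustration of an intent-based dispatcher, not Apple's actual implementation: fixed trigger phrases map to predefined handlers, and anything outside the lookup table simply cannot be answered.

```python
# Hypothetical sketch of an intent-based assistant (all names invented):
# fixed keywords map to predefined handlers; there is no open-ended
# language understanding.

def weather_handler(query):
    # Predefined action: return a canned weather response
    return "Today: sunny, 22 degrees"

def timer_handler(query):
    # Predefined action: set a timer
    return "Timer set for 10 minutes"

INTENTS = {
    "weather": weather_handler,
    "timer": timer_handler,
}

def handle(query):
    for keyword, handler in INTENTS.items():
        if keyword in query.lower():
            return handler(query)
    # Open-ended or complex questions fall through -- the classic limitation
    return "Sorry, I didn't understand that."

print(handle("What's the weather like?"))
print(handle("Explain quantum computing"))
```

An LLM-based assistant, by contrast, generates responses from the query itself rather than matching it against a fixed table, which is why the transition is a rebuild rather than an extension of this lookup logic.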

The transition to large language models (LLMs), which underpin ChatGPT and others, requires profound changes. It’s not a simple evolution but a fundamental rebuild. While Apple is developing its own language models that could power Siri, integrating these models into a fully functional voice assistant poses additional challenges in conjunction with Apple’s approach to privacy and quality assurance.

2. Privacy and Hallucination

Apple’s self-proclaimed focus is on privacy. Developing powerful AI models requires large datasets that can only be evaluated in large server farms. The approach of primarily processing data locally significantly limits the scope of user data that can be included in LLM training.

Additionally, Siri is primarily not a chatbot but a task execution system. The assistant can open apps, change settings, and send messages. Despite all safeguards, AI chatbots can hallucinate, which poses a real problem given Siri's deep system integration. Incorrect answers are one thing; deleted files are another.

At least for now, Apple cannot meet its own quality standards. In an interview with the Wall Street Journal, software chief Craig Federighi said: “It just doesn’t work reliably enough to be an Apple product.” System errors and incorrect answers could confuse users, and the company doesn’t want to tarnish its image. By incorporating solutions like ChatGPT and Gemini, the “blame” can be shifted to third-party providers.

3. Inefficient Internal Structures

The development of the new Siri was significantly hampered by internal fragmentation. Instead of a clearly defined team, Siri was spread across multiple groups for years. This structure hindered quick decisions and led to slow integration of AI research advancements into the product.

[Photo: Cook, Giannandrea, and Federighi at WWDC 2024. John Giannandrea (between Tim Cook and Craig Federighi) was supposed to lead the AI turnaround at Apple but failed due to internal blockades.]

Former AI chief John Giannandrea was unable to significantly advance Siri: he could not unify research, infrastructure, and product development, because too many components of Siri fell under different responsibilities. The result was sluggish implementation, while competitors continuously released new language models.

4. Competition Too Far Ahead

Additionally, Apple’s relatively late entry into LLM development was a factor. Companies like Google and OpenAI had already invested heavily in transformer models and scalable cloud infrastructure early on. Apple’s cautious approach to local data processing left it trailing behind.

With the unexpected launch of ChatGPT in 2022 and the rapid spread of generative AI, it was already too late for Apple. Even the departure of Giannandrea and the transfer of Siri development to Federighi’s software team couldn’t reverse the course. Even Google struggled significantly with its AI chatbot Bard and only gained traction against ChatGPT with Gemini.

Apple Intelligence, Gemini or Claude? Users Have the Choice

Apple has now recognized its insurmountable lag. The new approach: a hybrid strategy. Smaller AI models are developed by Apple itself and typically run locally on the device. These include writing tools and Image Playground. If on-device performance is insufficient, tasks are processed on Apple's own servers via a secure connection called Private Cloud Compute. For complex tasks, iPhone users can access ChatGPT, which is directly integrated as an extension in Apple Intelligence. The AI can rewrite texts, recognize objects in images, and is accessible via Siri ("Ask ChatGPT").
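The three-tier routing described above can be sketched as follows. This is a hypothetical illustration of the hybrid strategy, not Apple's implementation; the function names and the complexity labels are invented for clarity.

```python
# Hypothetical sketch of the hybrid strategy: simple tasks stay
# on-device, heavier ones go to Apple's servers (Private Cloud
# Compute), and complex requests are handed to an integrated
# third-party model such as ChatGPT. All names are illustrative.

def route_request(task, complexity):
    if complexity == "low":
        # e.g. writing tools, Image Playground
        return f"on-device model handles: {task}"
    if complexity == "medium":
        # Secure connection to Apple's own servers
        return f"Private Cloud Compute handles: {task}"
    # High complexity: hand off to the integrated third-party AI
    return f"external model (e.g. ChatGPT) handles: {task}"

print(route_request("rewrite this sentence", "low"))
print(route_request("summarize a long document", "medium"))
print(route_request("open-ended research question", "high"))
```

The design point of such a tiered setup is that user data only leaves the device when the local model genuinely cannot handle the request.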

With the new Siri based on Google Gemini, expected to be released in 2026, Apple is further expanding its hybrid strategy. The voice assistant will then be able to understand users' personal context, such as calendar entries, emails, and messages, and autonomously control apps. Additionally, Siri may get its own app, enabling longer chat conversations via text or voice, much like ChatGPT and Gemini. All of this remains under the Apple Intelligence umbrella.

Moreover, with iOS 27, users will have the option to choose their AI assistant. Instead of Apple Intelligence, Google Gemini, Anthropic Claude, and potentially others will be available. Android has offered a similar choice since 2025, allowing ChatGPT and Perplexity to replace Gemini as the standard assistant. Apple is merely following suit to provide users with the AI assistant they most need. The difference is that Apple Intelligence already relies on the expertise of competitors to offer artificial intelligence at all.

This article is a machine translation of the original German version of TECHBOOK and has been reviewed for accuracy and quality by a native speaker. For feedback, please contact us at info@techbook.de.
