Apple Considers OpenAI or Anthropic for Siri's Future, an Opportunistic Endeavor with Some Risks
By Benjamin Chan | 09 Jul 2025 | IN-7883
Apple's Consideration of Adopting OpenAI or Anthropic in Siri Going Forward | NEWS
In July 2025, Bloomberg reported that Apple is weighing options to license Artificial Intelligence (AI) technology from Anthropic PBC or OpenAI to power a newer version of Siri, potentially sidelining the in-house foundation models it has in development. Bloomberg states that Apple has asked both AI companies to train versions of their models on Apple's cloud infrastructure for testing. Earlier, in June 2025, reports from PYMNTS noted that the Apple Intelligence team faced significant challenges in developing its own Large Language Model (LLM) for Siri's conversational features, an issue that other AI giants, like OpenAI, did not encounter when building Generative Artificial Intelligence (Gen AI)-based voice assistants.
Apple's Divestment Is a Sensible Move, but Is It Risking Its Competitive Advantage? | IMPACT
The technology giant's decision to explore options outside of its own ecosystem is rare and could have varied impacts. On one hand, licensing third-party AI software for full integration into native hardware mirrors similar market developments. Samsung, for example, has incorporated Google Gemini as an AI-powered voice assistant under its Galaxy AI and Awesome Intelligence umbrellas, and Anthropic's Claude models power Amazon's premium voice assistant option, Alexa+.
Additionally, a move to step back from in-house AI investment would significantly change how many perceive the AI development race. Following last year's announcement of Apple Intelligence (after a prolonged period of limited public AI development), many anticipated that Apple would ramp up its investments to compete directly with industry leaders like Microsoft and Google. However, Apple's decision to step away from developing its own foundational AI models and rely on a third-party vendor is sensible, considering Meta's recent aggressive strategy of outspending its rivals and poaching key OpenAI talent as part of an intensifying arms race. Given that Apple Intelligence's development and initial deployment in devices have been limited, divesting is logical if it allows the company to avoid a sunk-cost trap in newer fields that are not its core expertise.
However, integrating third-party AI software into Apple's native application suite could complicate the company's navigation of user privacy policies, one of its most distinctive marketing strengths. This marks a considerable shift from its traditional strategy: Apple is known for maintaining a tightly controlled, vertically integrated ecosystem that allows it to set strict privacy controls on data dissemination. As Apple integrates third-party AI models into its core applications, it faces heightened exposure on multiple fronts to an evolving regulatory landscape that may mandate the retention and oversight of the conversational data generated. The company must therefore manage external data flows carefully, as its competitive advantage in user privacy could weaken under potential regulations targeting commercially available AI models. The move could also force Apple to reconcile its privacy-first branding with third-party data handling, a tension that may erode user trust and complicate Apple's ability to maintain a consistent global privacy standard.
The Equally Critical Roles That Device Manufacturers and AI Platform Providers Need to Play in the On-Device AI Market | RECOMMENDATIONS
Apple's failure to develop a competitive model alongside OpenAI, Meta, and Google demonstrates how difficult it is to create foundational LLMs, even for a trillion-dollar technology giant. However, on-device AI, in voice assistants or other forms, is rapidly becoming the norm, which leaves device manufacturers and AI platform providers plenty of opportunities to work closely together. Rapid adoption and integration of Gen AI is expected to drive a key share of AI software revenue, as ABI Research estimates that overall AI software revenue worldwide will reach US$467 billion in 2030, up from US$122 billion in 2024.
Some of the strategies that AI platform providers should consider to solidify their market reach are:
- Accelerating Ecosystem Partnerships: As more major Original Equipment Manufacturers (OEMs) adopt third-party AI solutions, AI software platforms should prioritize building flexible, scalable foundational models that can be integrated seamlessly into other hardware and/or software products (a minimal interface sketch follows this list).
- Investing in Differentiation: As players like Apple step away from building their own foundational AI models, there is an opportunity to double down on model innovation, such as multimodal capabilities and developer tools, to increase model adaptiveness and differentiation.
- Preparing for Regulatory Complexity: Considering recent developments in AI regulation, AI software platforms should take a proactive approach to developing compliance frameworks around data retention, transparency, and user consent in anticipation of increasing scrutiny as cloud-based LLMs see higher adoption on consumer devices.
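To make the integration point concrete, the snippet below is a minimal, hypothetical sketch in Swift of the kind of provider-agnostic contract an AI platform provider could expose to OEM partners, with retention and transparency metadata carried on each request. All type and field names are invented for illustration; this is a sketch under those assumptions, not any vendor's actual Software Development Kit (SDK).

```swift
import Foundation

// Hypothetical SDK surface an AI platform provider could ship to OEM partners.
// All names are illustrative assumptions, not any vendor's actual API.

/// How long conversational data may be retained server-side, supporting the
/// compliance posture described above.
enum RetentionPolicy {
    case none               // no server-side retention
    case days(Int)          // retained for a fixed window, then deleted
}

/// A single request sent to the licensed model.
struct AssistantRequest {
    let prompt: String
    let locale: Locale              // enables region-specific handling
    let retention: RetentionPolicy  // caller-declared retention preference
}

/// The response, with the model version surfaced for transparency and audit logs.
struct AssistantResponse {
    let text: String
    let modelVersion: String
}

/// The stable contract OEMs integrate against; the underlying foundation model
/// can be upgraded or swapped without breaking device-side code.
protocol FoundationModelService {
    func respond(to request: AssistantRequest) async throws -> AssistantResponse
}
```

Keeping the contract small and versioned is one plausible way for a platform provider to let OEMs integrate once while the model behind it continues to evolve.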
On the other hand, device manufacturers looking to keep pace with market trends through third-party AI integrations should consider:
- Balancing Integration and Control: When integrating third-party AI, device manufacturers need to weigh factors such as data residency, privacy controls, and modularity to minimize regulatory and reputational risks.
- Model Selection and Modularity: Evaluating the strengths of current LLMs for specific device features, such as voice assistance, summarization, or multimodal functions, will be a key deployment consideration. Additionally, device architectures should allow for diversification over the long run, so that newer models are not locked out by legacy architecture and both the technology and the partnership can evolve (see the sketch after this list).
- Regulatory Compliance: Implementing choices for users to opt in to or out of AI usage, paired with region-specific data controls, can help navigate regulatory complexity while building the safeguards and guardrails needed to protect user data.
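As a companion to the points above, the following hypothetical Swift sketch shows one way a device manufacturer might keep third-party models swappable behind a thin abstraction while enforcing user opt-in before any cloud inference. All names are invented for illustration; it is a sketch under those assumptions, not a definitive implementation.

```swift
import Foundation

// Hypothetical device-side routing layer; all types are invented for illustration.

enum DataRegion { case eu, us, other }

/// User-facing AI settings, reflecting the opt-in/opt-out recommendation.
struct AIPrivacySettings {
    var cloudAIEnabled: Bool   // explicit opt-in required before cloud inference
    var region: DataRegion     // drives region-specific data controls
}

/// Minimal contract any licensed backend (first- or third-party) must satisfy,
/// so newer models are not locked out by legacy architecture.
protocol AssistantBackend {
    var providerName: String { get }
    func reply(to prompt: String) async throws -> String
}

/// Routes each prompt based on the user's consent settings.
struct AssistantRouter {
    let onDeviceFallback: any AssistantBackend  // small local model for opted-out users
    let cloudBackend: any AssistantBackend      // e.g., a licensed third-party LLM
    let settings: AIPrivacySettings

    func handle(_ prompt: String) async throws -> String {
        // Only opted-in users reach the cloud model; regional pinning of traffic
        // (e.g., EU data staying on EU infrastructure) would be layered on here.
        guard settings.cloudAIEnabled else {
            return try await onDeviceFallback.reply(to: prompt)
        }
        return try await cloudBackend.reply(to: prompt)
    }
}
```

Because the router depends only on the AssistantBackend abstraction, a manufacturer could, in principle, swap or add model providers later without reworking the device-side assistant logic.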
Written by Benjamin Chan