Apple and Google announced a multi-year collaboration under which future Apple Foundation Models will be built on Google’s Gemini models and cloud technology, a move intended to power upgraded Apple Intelligence features including a more personalized Siri later in 2026.
Official confirmations
Apple and Google statements
Apple told CNBC that “after careful evaluation, we determined that Google’s technology provides the most capable foundation for Apple Foundation Models and we’re excited about the innovative new experiences it will unlock for our users.” Google’s public post described the collaboration as multi-year, noting Gemini models and Google Cloud will help power future Apple Intelligence features, including a more personalized Siri coming later this year.
Joint Statement: Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a…
— News from Google (@NewsFromGoogle) January 12, 2026
Independent reporting
Bloomberg has independently described the deal as substantial in both scale and commercial terms, citing sources who say Apple will pay for access to large Gemini variants and related cloud capacity. CNBC and other outlets confirmed Apple’s direct statement to Jim Cramer, reinforcing that both companies have acknowledged the partnership publicly.
What the partnership covers
Foundation models and cloud integration
Under the agreement, Apple Foundation Models for selected features will be based on Google’s Gemini stack and Google Cloud infrastructure. Apple reiterated that Apple Intelligence will continue to run on Apple devices and on Private Cloud Compute when appropriate, establishing a hybrid on-device and cloud architecture.
Impact on Siri and Apple Intelligence
Gemini-backed models are expected to support more context-aware and personalized Siri features and to extend capabilities announced at WWDC 2024, such as notification summaries, writing tools, and image generation. Apple previously delayed some Siri upgrades while refining model performance and privacy controls.
Background and reported terms
Reported financials and model scale
Earlier reporting indicated Apple had considered several vendors and was negotiating terms that could reach roughly $1 billion per year to access a large Gemini variant. Reports also suggested Apple might use a Gemini model variant with on the order of 1.2 trillion parameters for complex tasks. Neither company confirmed those specific figures in the joint announcement.
Strategic context
The collaboration follows months of industry reporting and internal Apple work to build Apple Intelligence. Outsourcing parts of foundation model infrastructure echoes past examples where Apple temporarily relied on external services while developing internal replacements. The arrangement strengthens Google Cloud’s presence among consumer device makers and highlights the growing role of large model providers in mainstream products.
Risks and operational questions
Privacy and on-device processing
Apple emphasized that Apple Intelligence features will continue to use on-device processing and Private Cloud Compute where appropriate, but the technical split between on-device and cloud operations for more advanced features remains to be clarified. This will be a focal point for regulators and privacy advocates as the rollout progresses.
Product timing and compatibility
Apple has said the more personalized Siri will arrive later this year. Earlier delays suggest engineering and performance tradeoffs remain, especially for delivering complex capabilities across a wide install base of devices. How Apple stages features across hardware generations will affect adoption and user experience.

Editor’s Comments
Apple choosing to work with Google’s Gemini on Siri feels less like a surprise and more like a pragmatic reset. Apple is clearly prioritizing speed over pride. Building a competitive large model from scratch takes time, and Siri cannot afford to stay behind while user expectations around AI are being reshaped almost monthly.
What matters more than the model itself is where control still sits. Apple owns the system entry point, the default assistant, and the rules of integration. Gemini may power part of the intelligence, but Siri remains the interface that decides which apps surface, which actions are suggested, and which experiences stay invisible. That distinction is easy to overlook, but it is where long-term ecosystem power actually lives.
For developers and app marketers, this shift deserves close attention. If Siri becomes genuinely context-aware, discovery may happen before users ever open the App Store. That raises an uncomfortable question: in an AI-first interaction model, how many apps will users actively search for, and how many will simply be “chosen” by the system?
The more interesting debate is not whether Apple should have built its own model faster, but whether this move signals a future where assistants, not stores, become the primary distribution layer. If that shift happens, ASO (App Store Optimization) may extend beyond keywords, creating new opportunities for apps that clearly communicate their value to both users and intelligent systems.