Apple is set to tap the user data at its disposal for a purpose that may not sit well with all of its customers: the company reportedly plans to analyze that data to improve its large language model (LLM) software while, it says, upholding user privacy.
“Only users who have opted-in to send Device Analytics information to Apple participate,” Apple said in a blog post on Monday. “The contents of the sampled emails never leave the device and are never shared with Apple. A participating device will send only a signal indicating which of the variants is closest to the sampled data on the device, and Apple learns which selected synthetic emails are most often selected across all devices.”
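Based on Apple's description, each participating device compares locally sampled data against server-provided synthetic variants and reports only the index of the closest match, never the data itself. Here is a minimal sketch in Python of that idea, assuming a simple embedding-and-nearest-neighbor comparison; the vectors, the variant set, and the similarity measure are illustrative stand-ins, not Apple's actual pipeline:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def closest_variant_index(local_embedding, variant_embeddings):
    """Return only the index of the synthetic variant nearest the
    on-device sample; the sample itself never leaves the device."""
    scores = [cosine_similarity(local_embedding, v) for v in variant_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)

# Illustrative embeddings (in practice these would come from a language model)
local_email = [0.9, 0.1, 0.0]
synthetic_variants = [
    [0.1, 0.9, 0.0],   # variant 0
    [0.8, 0.2, 0.1],   # variant 1: closest to the local sample
    [0.0, 0.1, 0.9],   # variant 2
]
signal = closest_variant_index(local_email, synthetic_variants)  # only this index is reported
```

Because every device reports only a small integer, Apple can tally which synthetic variant "wins" most often across the fleet without ever seeing anyone's email.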
What is a Large Language Model (LLM)?
A Large Language Model (LLM) is an advanced type of artificial intelligence designed to understand and generate human language. It is trained on massive datasets, including books, websites, and other text sources, allowing it to learn patterns, grammar, and context. Using deep learning, particularly transformer architectures, LLMs can perform a wide range of language tasks such as translation, summarization, text completion, and answering questions.
READ: Intelligent assistant apps Google Now, Siri and Cortana are insensitive to emotions (March 15, 2016)
LLMs work by predicting the next word in a sequence, considering the words that came before. This makes them capable of generating coherent and contextually relevant responses. Popular examples include OpenAI’s GPT models, which can carry on conversations, write essays, generate code, and more.
Despite their impressive capabilities, LLMs don’t truly understand language like humans do—they rely on statistical patterns rather than reasoning or consciousness. However, their ability to mimic understanding makes them valuable tools in many applications, from education to business automation.
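The next-word-prediction idea described above can be sketched with a toy bigram model: it counts which word follows which in training text and predicts the most frequent follower. A real LLM uses transformer networks with billions of parameters, but the core task of scoring candidate next words given preceding context is the same. The tiny corpus here is purely illustrative:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-pair occurrences: counts[w1][w2] = times w2 followed w1."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for w1, w2 in zip(words, words[1:]):
            counts[w1][w2] += 1
    return counts

def predict_next(counts, word):
    """Return the word that most often followed `word` in training."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = [
    "the model predicts the next word",
    "the model learns patterns",
]
bigrams = train_bigrams(corpus)
prediction = predict_next(bigrams, "the")  # "model" follows "the" most often here
```

This also illustrates the limitation noted above: the model has no understanding of meaning, only statistics over which words co-occur.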
The company has been training its artificial intelligence (AI) models on synthetic data but has found that approach ineffective on its own, Apple wrote in the blog post.
READ: OpenAI holds off on promise to creators, fails to protect intellectual property (January 3, 2025)
Apple also said that it will employ differential privacy to identify popular Genmoji prompts among users who opt into device analytics. By introducing randomized noise into data collection, the system will ensure that only commonly used prompts are recognized, safeguarding unique user inputs. This method prevents Apple from associating any data with specific devices or user accounts.
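Differential privacy of the kind described above is often implemented with a randomized-response scheme: each device randomly flips its answer before reporting, so no single report can be trusted, yet the true frequency can still be recovered from the aggregate. The following is a minimal Python sketch of that principle; the flip probability and the binary "used this prompt" signal are illustrative assumptions, and Apple's actual mechanism is more sophisticated:

```python
import random

def randomize(used_prompt, p_truth=0.75):
    """Report the true bit with probability p_truth, otherwise a fair coin flip.
    Any single report is plausibly deniable, protecting unique user inputs."""
    if random.random() < p_truth:
        return used_prompt
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth=0.75):
    """Unbiased estimate of the true fraction from noisy reports.
    E[report] = p_truth * rate + (1 - p_truth) * 0.5, solved for rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

random.seed(0)
true_rate = 0.30                       # say 30% of devices used a given prompt
devices = [random.random() < true_rate for _ in range(100_000)]
reports = [randomize(d) for d in devices]
estimate = estimate_true_rate(reports)  # recovers roughly 0.30 despite the noise
```

The key property matches Apple's stated goal: only signals common across many devices rise above the injected noise, so rare or unique prompts stay hidden.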
The company plans to extend privacy-preserving techniques to AI-driven features, including Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence, ensuring consistent privacy protections across its ecosystem.