The Dangers of Microsoft's Recall: Security Experts Sound the Alarm

AI written, human edited. 

Microsoft's recent announcement of its new "Recall" feature for Windows 11 has set off alarm bells among cybersecurity experts and privacy advocates. In the latest episode of the popular Security Now podcast, hosts Steve Gibson and Leo Laporte delved deep into this controversial feature's potential risks and implications.

Microsoft describes Recall as a feature that captures and stores everything a user types or views on their Windows 11 device, essentially creating a comprehensive, searchable timeline of their computer usage. While the company positions it as a productivity tool that lets users search and recall their past activities, the security concerns raised by Gibson and Laporte are hard to ignore.

At the core of the controversy lies the fact that Recall stores this data, including sensitive information like passwords and credit card numbers, locally on the user's machine in a new folder called "Core AI Platform." While Microsoft claims the data is encrypted at rest using BitLocker, the hosts point out that the live data is unencrypted while the user is logged in, so anything that reads or backs up those files sees it in the clear, creating potential avenues for exploitation.

Steve Gibson expressed grave concerns about the implementation of Recall. He cited security researcher Kevin Beaumont, who detailed how "stealing everything you've ever typed or viewed on your own Windows PC is now possible with two lines of code." Beaumont's findings suggest that malware could easily access and exfiltrate the Recall database, exposing users to unprecedented privacy risks.
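The force of Beaumont's claim is that the captured text reportedly sits in an ordinary SQLite database file under the user's own profile, readable by any process running as that user. A minimal sketch of the idea follows; the table and column names here are assumptions for illustration (the real Recall schema may differ), and the script builds a stand-in database first so the example is self-contained:

```python
import os
import sqlite3
import tempfile

# Build a stand-in database mimicking (hypothetically) how Recall might
# store captured text in plain SQLite under the user's profile.
db_path = os.path.join(tempfile.mkdtemp(), "ukg.db")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE WindowCaptureTextIndex_content (c0 TEXT)")
con.execute(
    "INSERT INTO WindowCaptureTextIndex_content VALUES ('password: hunter2')"
)
con.commit()
con.close()

# The "two lines of code" in spirit: any user-level process can open
# the file and dump everything that was ever captured.
rows = sqlite3.connect(db_path).execute(
    "SELECT c0 FROM WindowCaptureTextIndex_content").fetchall()
print(rows)
```

The point is not the specific schema but the threat model: no privilege escalation or decryption step is needed, because the operating system hands the logged-in user, and therefore any malware running as that user, the data in plaintext.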

Leo Laporte echoed these concerns, pointing out that while the data may be encrypted at rest, once a user logs in, it becomes decrypted and accessible to any malware on the system. This negates Microsoft's claims of security and puts users' sensitive information at risk.

However, Gibson also offered a compelling theory about Microsoft's motivations for Recall. He suggested that the company might be using it as a first step towards creating a powerful, personalized large language model (LLM) for each user. By capturing and storing a user's entire corpus of computer usage, Microsoft could train a local LLM to serve as a personal AI assistant with intimate knowledge of the user's life and activities.

While the idea of a personalized AI assistant with perfect recall is undoubtedly intriguing, the security risks highlighted by Gibson and Laporte cannot be ignored. As Laporte pointed out, a poor implementation of Recall could "kill it in its tracks," eroding public trust in AI technologies over privacy concerns.

The episode concluded with a call for Microsoft to address the security issues surrounding Recall and potentially recall (pun intended) the feature until it can be implemented with proper safeguards and user consent. As Gibson aptly stated, "Microsoft has not proven itself to be a trustworthy caretaker of such information."

In the rapidly evolving world of AI and personal computing, features like Recall highlight the delicate balance between innovation and security. As technology companies push the boundaries, it is crucial for security experts and advocates to raise their voices and ensure that privacy and user safety remain paramount considerations.
