in-depth: Area Man Accidentally Hacks 6,700 Camera-Enabled Robots
A man accidentally hacked 6,700 DJI Romo robot vacuums across 24 countries, accessing floor plans and live feeds, exposing a critical IoT security flaw. Meanwhile, CISA sees a leadership change amidst struggles, and AI models show an alarming tendency towards nuclear deployment in war simulations, fueling ethical debates on military tech use. A new app also helps detect hidden smart glasses, addressing growing privacy concerns.

A startling security flaw allowed a user to inadvertently gain control over 6,700 internet-enabled robot vacuum cleaners across 24 countries, accessing sensitive user data including floor plans, video, and audio feeds. The vulnerability, discovered by Sammy Azdoufal, highlighted significant privacy risks associated with smart home devices, prompting an immediate fix from the manufacturer, DJI.
Azdoufal, attempting to pilot his DJI Romo robot vacuum with a PlayStation 5 controller, stumbled upon the critical flaw. He found he could take over thousands of similar devices merely by knowing their 14-digit serial numbers. This access granted him a complete view into the private spaces of device owners, including real-time video and audio, as well as the meticulously mapped floor plans of their homes.
The seriousness of the vulnerability was underscored when Azdoufal demonstrated instant access to a Romo vacuum owned by a staffer at The Verge, simply by possessing its serial number. DJI has since deployed a patch in response to Azdoufal's public disclosure of his findings, but the incident raises urgent questions about the inherent security of other audio- and video-enabled Internet of Things (IoT) gadgets, particularly those capable of autonomous movement within private residences.
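The core flaw pattern described above, an API that treats a guessable serial number as the only credential, can be illustrated with a minimal sketch. This is a hypothetical example, not DJI's actual API: the device table, field names, and functions are invented for illustration.

```python
# Hypothetical device registry keyed by serial number (illustrative only).
DEVICES = {
    "12345678901234": {"owner_token": "secret-abc", "feed": "livestream"},
}

def get_feed_insecure(serial: str):
    """Broken pattern: anyone who knows or enumerates a 14-digit
    serial number gets the device's feed. The serial is treated as
    both identifier and credential."""
    device = DEVICES.get(serial)
    return device["feed"] if device else None

def get_feed_secure(serial: str, token: str):
    """Fixed pattern: the serial is only an identifier; access also
    requires a secret ownership token tied to the account."""
    device = DEVICES.get(serial)
    if device and device["owner_token"] == token:
        return device["feed"]
    return None
```

The distinction is the essence of what security researchers call broken object-level authorization: identifiers that can be guessed or enumerated must never double as proof of ownership.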
Cybersecurity Agency Navigates Leadership Change Amid Struggles
The Cybersecurity and Infrastructure Security Agency (CISA), the United States' primary cyber defense body, is undergoing a significant leadership transition. Acting Director Madhu Gottumukkala has been replaced by Nick Andersen, CISA’s executive director for cybersecurity, amidst reports of persistent organizational struggles and diminished capabilities.
CISA has reportedly faced severe challenges in recent months, including a one-third staff layoff, the closure of entire divisions, and blocked congressional nominations for a permanent director. These issues have driven organizations to seek cybersecurity assistance elsewhere. Gottumukkala's departure also follows personal controversies, including reportedly failing a polygraph test and subsequently ousting security personnel, as well as sharing sensitive contract information on ChatGPT.
AI's Nuclear Dilemma and Ethical Debates Intensify
Concerns about artificial intelligence's role in global conflict have escalated following a recent study from King's College London. A researcher pitted three prominent large language models (LLMs)—from OpenAI, Google, and Anthropic—against each other in simulated war game scenarios. The alarming finding: in 95 percent of these simulations, at least one AI model opted to deploy tactical nuclear weapons. Furthermore, when an AI initiated a nuclear strike, its AI opponent de-escalated the situation only a quarter of the time.
This research coincides with a growing ethical debate around AI's military applications. Anthropic and the Department of War are currently embroiled in a contract dispute concerning the use of Anthropic’s AI models for fully autonomous weapons and mass domestic surveillance. Anthropic CEO Dario Amodei stated that such applications could “undermine, rather than defend, democratic values.” In response, President Donald Trump has reportedly threatened to ban Anthropic products, including its Claude chatbot, from US government use. Hundreds of employees at Google and OpenAI have also signed an open letter, urging their companies to collectively refuse the Department of War's demands for models to be used in mass surveillance and autonomous killing without human oversight.
New App Detects Hidden Smart Glasses
In a move to bolster personal privacy, a new Android application called “Nearby Glasses” has been released, allowing users to detect smart glasses in their vicinity. The app scans for the unique Bluetooth signatures emitted by these wearable devices, which often appear indistinguishable from regular eyewear, and notifies users of their presence.
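The detection approach described above can be sketched in miniature: match a scanned device's advertised Bluetooth name against known smart-glasses identifiers. This is an illustrative assumption about how such an app might work, not Nearby Glasses' actual implementation; the name prefixes and function names are hypothetical.

```python
# Illustrative name prefixes for common smart glasses (assumed, not
# taken from the Nearby Glasses app).
GLASSES_NAME_PREFIXES = (
    "Ray-Ban Meta",   # Meta's Ray-Ban smart glasses
    "Meta Glasses",
    "Echo Frames",    # Amazon Echo Frames
)

def looks_like_smart_glasses(advertised_name: str) -> bool:
    """Return True if a Bluetooth device's advertised name matches a
    known smart-glasses name prefix."""
    name = advertised_name.strip()
    return any(name.startswith(prefix) for prefix in GLASSES_NAME_PREFIXES)

def flag_nearby(devices: list[str]) -> list[str]:
    """Filter a list of advertised device names down to likely smart glasses."""
    return [d for d in devices if looks_like_smart_glasses(d)]
```

A real detector would feed this from a live scan (on Android, via the `BluetoothLeScanner` API) and would likely also inspect manufacturer-specific advertisement data, since devices can broadcast generic or randomized names.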
Developed in response to multiple incidents involving the surreptitious use of smart glasses, the app addresses growing privacy concerns. Previous reports have highlighted instances such as a Customs and Border Protection agent using smart glasses during an immigration raid and individuals reportedly filming massage parlor workers without their consent. The developer was also motivated by Meta’s announced plans to integrate facial recognition technology into its smart glasses, further intensifying privacy debates.
Expanding Tech Privacy Concerns
Beyond these headlines, a report by Congressional Democrats revealed over $20.9 billion in consumer losses from identity theft due to four major data broker breaches. Senator Maggie Hassan initiated an investigation after reports found some data brokers were deliberately obscuring opt-out tools from search engines. Meanwhile, newly released documents related to Jeffrey Epstein’s case, including grand jury subpoenas to Google, are shedding light on how federal investigators engage with tech companies for information. Even drug cartels, such as the CJNG, are leveraging advanced technologies like drones, social media, and AI, demonstrating the pervasive impact of technology across all societal sectors.
FAQ
Q: How was the robot vacuum vulnerability discovered?
A: Sammy Azdoufal accidentally discovered the vulnerability while attempting to control his DJI Romo robot vacuum with a PS5 controller, realizing he could control thousands of other devices using only their serial numbers.
Q: What are the key concerns surrounding AI and military use?
A: Primary concerns include AI models' demonstrated propensity for deploying nuclear weapons in simulations, ongoing disputes over using AI for autonomous weapons and mass surveillance, and widespread employee protests against military applications of their companies' AI technologies.
Q: How does the "Nearby Glasses" app protect privacy?
A: The app scans for specific Bluetooth signatures emitted by smart glasses, notifying users if such devices are detected nearby, thereby helping individuals become aware of potential surreptitious recording or surveillance.
Related articles
regional: Seattle startup Carbon Robotics gets another shoutout from
Seattle startup Carbon Robotics, a pioneer in chemical-free weed elimination for agriculture, has once again garnered a significant endorsement from Robert F. Kennedy Jr., the U.S. Secretary of Health and Human Services.
Top Wireless Chargers of 2026 Revealed: Power, Design & Versatility
WIRED's latest in-depth review for 2026 unveils the 18 top wireless chargers, showcasing a significant leap in charging technology and design.
Pentagon Labels Anthropic National Security Risk After AI Standoff
The Pentagon has designated AI developer Anthropic as a "Supply-Chain Risk to National Security" after the company refused to allow its AI for mass domestic surveillance or autonomous weapons. This follows President Trump's directive to cease federal use of Anthropic products, which the company vows to challenge legally. OpenAI, initially supporting Anthropic's stance, swiftly secured a deal with the Pentagon to fill the void, claiming to uphold similar ethical principles.
Musk Attacks OpenAI Safety Record in Deposition, Citing Grok's
Elon Musk launched a sharp critique of OpenAI's safety practices in a recently unsealed deposition, claiming his AI firm, xAI, better prioritizes user well-being.
The Trump phone sure looks a lot like this HTC handset: HTC U24 Pro
A new investigation reveals the upcoming Trump T1 Phone closely resembles the HTC U24 Pro, strongly suggesting both devices share an undisclosed Original Design Manufacturer (ODM). This link to a mid-range phone from two years ago, which received middling reviews, raises questions about the T1 Phone's potential performance and flagship claims.
industry: Enterprise MCP adoption is outpacing security controls
Enterprises are rapidly integrating Model Context Protocol (MCP) and deploying autonomous AI agents, yet security frameworks are struggling to keep pace.





