Roomba vacuum cleaners collected sensitive images, such as one showing a woman sitting on a toilet seat, and they were reportedly shared among people tasked with training the AI, suggesting that the soon-to-be Amazon-owned company handles sensitive customer data in a rather irresponsible fashion.

According to documents seen by MIT Technology Review, images taken by Roomba robotic vacuum cleaners were shared on Discord and Facebook, among other places, by contractors tasked with labeling them. iRobot claims that the leaked images were captured by test hardware, whose volunteer users agreed to contribute audiovisual data. This training data, captured by the cleaning robot’s onboard cameras, was collected by iRobot and then sent to a startup named Scale AI, which employs contract workers to label it so the AI can make sense of what it sees.

Related: H&R Block And Other Tax Prep Companies Shared Filers’ Info With Facebook

Beware Of Your Friendly Robo Cleaner

[Image: A Roomba model J7+ robotic vacuum cleaner moving around in a house. Credit: iRobot]

The process of annotating data for an AI is usually outsourced, through contractors, to low-wage workers in Asia and Africa. Notably, none of the hardware involved was consumer-grade equipment sold commercially. iRobot says that by allowing the dissemination of sensitive media, Scale AI violated its agreements, and that the two companies are cutting their professional ties in the wake of the incident. Scale AI also admitted that its data-labeling workers broke its code of conduct.

The soon-to-be-Amazon-owned company also claims that these “special development robots with hardware and software modifications” come with agreements that explicitly told volunteers their data was being sent back for training purposes. Volunteers were also advised to keep sensitive subjects and children out of the areas where the robots move around capturing photos, videos, and audio clips. What is alarming is that 95 percent of iRobot’s training data is harvested from real homes, with most of the volunteers being employees or paid participants recruited by contractors. Only a small share of that AI training data is collected from artificial model homes built as test sets.

The leaked images, of which MIT Technology Review saw 15, were collected in 2020 from homes in the United States and Japan, among other countries. iRobot claims that all personally identifying information is removed from the training data, and that the entire data log is deleted if any of the images appear to be sensitive in nature, such as those depicting private moments or any state of nudity. However, it is unclear whether such images are deleted automatically, or whether a human sees them first and then decides to take the necessary action.

A rather concerning takeaway from the iRobot saga is that these leaks happened during the testing phase of an in-development cleaning robot, where volunteers are told what they are getting into. For commercially available devices like the Roomba, most people don’t even bother reading the data collection policies in detail and may end up unwittingly handing over a ton of personally identifiable data to corporations with lax data security measures.

More: Uber Eats' Delivery Robots Are Coming To Miami

Source: MIT Technology Review