Google AI confuses medical photos of children with child pornography

The story of a photo of a toddler’s groin infection on a father’s Android smartphone has taken an unexpected turn. According to the American daily the New York Times, Google shut down the parent’s accounts and filed a report with the National Center for Missing and Exploited Children (NCMEC) in the United States, prompting a police investigation.

Another parent, referred to as Mark by the New York Times, received the same treatment from Google. He had sent a photo of his child’s swollen genital area to a nurse, at her request, in February 2021, at the height of the COVID-19 pandemic, ahead of a video consultation with a doctor.

The Californian giant notified him two days later that his accounts had been locked for a serious violation of Google’s policies because of content that could be illegal.

Mark lost access to his emails, contacts, photos and phone number (he was using the Google Fi mobile service). His appeal to Google was denied.

A San Francisco police investigation was opened, and the investigator concluded that the incident did not meet the elements of a crime and that no crime had been committed, the New York Times notes.

Useful AI, but not without shortcomings

“It is precisely the nightmare that worries us all,” said Jon Callas, director of technology projects at the Electronic Frontier Foundation (EFF), which advocates for the privacy of internet users.

The organization persuaded Apple to put the brakes on a plan to protect minors announced in 2021. The giant wanted to scan images on its customers’ devices before they were uploaded to Apple’s iCloud, in order to compare them against the NCMEC database.

According to the EFF, this would have been a step backward for user privacy.

Beyond the privacy concerns, this story highlights how difficult it is for an AI to distinguish a medical photo from potential abuse when an image of a naked child is stored in a user’s digital library or in the cloud.

Like Google, social networks Facebook and Twitter, as well as the Reddit forum, use the same tool, Microsoft’s PhotoDNA, to detect potential abuse.

How it works: PhotoDNA computes a digital fingerprint (a hash) of each uploaded image and compares it against a database of hashes of known child sexual abuse material.
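To illustrate the general principle (a minimal sketch only: PhotoDNA itself is proprietary and its hash database is not public), the snippet below uses the open-source Python libraries Pillow and imagehash as stand-ins for perceptual hashing. The hash value, distance threshold and file name are invented for the example.

```python
# Illustrative sketch of perceptual-hash matching, NOT PhotoDNA itself.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical database of 64-bit perceptual hashes of known illicit images.
# In a real system, such hashes would come from NCMEC and industry hash-sharing programs.
KNOWN_HASHES = [
    imagehash.hex_to_hash("ffd8e0c0b0a09080"),  # placeholder value
]

# Small Hamming-distance tolerance so resized or re-encoded copies still match.
MAX_HAMMING_DISTANCE = 5

def matches_known_material(path: str) -> bool:
    """Return True if the image's perceptual hash is close to any known hash."""
    candidate = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
    return any(candidate - known <= MAX_HAMMING_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    # In a scanning pipeline, this check would run on each newly uploaded image.
    print(matches_known_material("uploaded_photo.jpg"))
```

This covers only the hash-matching step; as the article notes further on, flagged content is then reviewed by human specialists.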

In 2012, this technology led to the arrest of a registered sex offender who was using Gmail to send images of a young girl.

Google has also had its own toolkit since 2018, which relies on more advanced AI and can even identify illicit material that has never been seen before.

Questioned by the New York Times, Google indicated that it only scans its users’ personal images when an affirmative action is taken. Backing up photos to Google Photos, for example, can be considered as such. The company is required by US federal law to report potential abuse to the NCMEC.

“Child sexual abuse material is hateful, and we are committed to preventing its dissemination on our platforms.”

A quote from Christa Muldoon, spokesperson for Google.

“Our team of child safety specialists reviews flagged content for accuracy and consults with pediatricians to ensure we are able to identify instances where users may be seeking medical advice,” the Google spokesperson added in a statement.

In 2021, Google reported 621,583 cases of child sexual abuse material to the CyberTipline of the National Center for Missing and Exploited Children. Of these, 4,260 potential victims were reported to the authorities, according to the New York Times.
