MRI brain scan technology

Deepfake Porn And Brain Scans Make A Scary Sex Tech Future


Tech giant Apple attracted some controversy last week with an announcement that it presumably expected would win it some kudos across the social media virtueverse (term just invented by myself). After all, few would argue that rooting out illegal child porn images is anything but a good thing. However, some voices expressed a little disquiet at the way Apple were planning to do this, and the disquiet quickly turned into quite a chorus of criticism. Apple had announced that they would be using software to scan millions of customers’ iPhones for illegal child abuse images, and that they would report anything suspicious to law enforcement. Although other big tech companies such as Microsoft already check for illegal images on cloud-based services, the controversy over Apple’s announcement is that they would be using an AI algorithm to scan not just images uploaded to Apple’s cloud, but images stored on the phones themselves.
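
For readers curious about the mechanics: Apple’s system is reported to rely on a proprietary perceptual-hashing scheme called NeuralHash, matched against a database of hashes of known abuse imagery, the details of which are well beyond this article. But the general idea of checking a device’s photos against a list of known image hashes can be sketched, very crudely, in a few lines of Python. The average-hash function, the hash database, and the match threshold below are simplified placeholders for illustration only, not Apple’s actual method.

```python
# A rough, simplified sketch of hash-based image matching -- NOT Apple's actual
# NeuralHash system. It computes a simple "average hash" for an image and
# compares it against a hypothetical list of hashes of known prohibited images.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image, grayscale it, and build a 64-bit hash where each bit
    records whether a pixel is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for i, p in enumerate(pixels):
        if p > mean:
            bits |= 1 << i
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# Hypothetical database of hashes of known illegal images (placeholder values).
KNOWN_HASHES = {0x1F2E3D4C5B6A7988, 0x0011223344556677}

def flag_if_match(path: str, threshold: int = 5) -> bool:
    """Flag an on-device photo if its hash is 'close enough' to a known hash."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in KNOWN_HASHES)
```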

While a number of politicians on both sides of the Atlantic rushed to promote their ‘Save the Children’ voter credentials, other more objective and measured commentators had the courage to voice their concern that this could represent a slippery slope of intrusion into our digital lives.

“No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this,” Snowden said on Twitter. “Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs—*without asking.*”

https://www.businessinsider.com/apple-iphone-scan-open-letter-child-abuse-plan-5k-signatures-2021-8

The Electronic Frontier Foundation, an organization devoted to digital rights and privacy, published an open letter/petition outlining their concerns. It quickly garnered over 5,000 signatures.

Meanwhile, the moral panic over ‘deepfake porn’ continues. This week concerns were raised over a rogue deepfake site that uses algorithms to convert any uploaded photo of a clothed woman – celebrity or not – into a deepfake nude. Prior to this, there have been a number of sites (including, of course, the now-banned subreddit r/deepfakes) that allowed visitors to submit pictures of clothed women for the site’s members to turn into nudes, either for free or, increasingly now, for payment. This is the first site that appears to be fully automated, although a couple of years ago there was outrage over a smartphone app – DeepNude – that allowed anyone to upload a photo and convert it into a nude. New laws against deepfake porn are likely to be the result.

One can imagine that at some point, the Apple scanning software will also be used to detect illegal deepfake images on a person’s phone or computer. After all, if possession of deepfake images is made illegal, why wouldn’t it be just as valid for a company like Apple to scan for those illicit images too? I believe we may need to start discussing now, perhaps with some urgency, what the limits should be to such direct scanning of a person’s digital device for illegal material. This is because the digital world will soon become enmeshed with our ‘private’ mental worlds, and Apple will be at the forefront of that as much as any big tech company.

I doubt if any man, anywhere, has not at some point mentally undressed an attractive member of the opposite (or same) sex. Unless you are an extreme puritanical religious person, or a radical feminist, it’s unlikely you would even consider the possibility that this should be illegal, or that it is even immoral. But what is the difference between this and somebody creating a deepfake nude, at least if he or she does not intend to share it with others and it is strictly for private use? One might say that the physical act of uploading a person’s photo (without their consent) to a deepnude algorithm site or app is different to a private thought, in that the resulting nude image carries a risk of being spread online, even if the uploader does not intend it to be. For example, the deepsukbebe site claims that uploaded images are deleted within hours, but how can the uploader be certain of that? And by the site’s own admission, the images are online, or at least on the site’s servers, for a limited period. These are valid concerns, but we need to balance them against equally valid concerns about how any laws made to address them may end up massively overreaching and potentially criminalizing far too many people.

Most readers are likely aware of Elon Musk’s ambitions to connect people’s brains directly to computers and the Internet via neural interfaces. Although Musk’s idea involves an invasive brain-computer interface implanted inside a person’s skull, which many may find implausible, a number of Big Tech companies are working on less intrusive ways to connect our minds to digital devices. For example, Facebook this year introduced a wristband powered by electromyography (EMG), which reads the electrical signals the brain sends to the muscles in the wrist. The hope is that it will enable wearers of Facebook’s forthcoming augmented reality glasses to control their spectacles by barely twitching their fingers. Non-invasive brain reading itself, which has existed for as long as lie detectors, is becoming ever more advanced, with MRI scans now able to identify with surprising accuracy the objects a person is picturing in their mind.
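
To give a flavour of what such ‘brain reading’ amounts to in practice: decoding studies typically train a classifier on patterns of brain activity to predict which category of object a person was viewing or imagining. The toy Python sketch below does exactly that with made-up ‘voxel’ data and an off-the-shelf classifier; the categories, data sizes, and resulting accuracy are invented for illustration, and real fMRI decoding pipelines are far more involved.

```python
# Toy sketch of the principle behind fMRI "mind reading" studies: learn a mapping
# from brain-activity patterns (here, synthetic random "voxel" data) to the object
# category a person was looking at or imagining. Illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
categories = np.array(["face", "house", "tool", "animal"])

# Synthetic data: each category gets its own (made-up) activation pattern plus noise.
labels = rng.integers(0, len(categories), n_trials)
patterns = rng.normal(size=(len(categories), n_voxels))
X = patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
y = categories[labels]

# Train on some trials, then try to "read the mind" on held-out trials.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy on held-out trials:", clf.score(X_test, y_test))
```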

It could be that augmented reality (AR) glasses, which many expect to replace both smartphones and computers in the next decade, may pose the first issues for a society that by then may have long accepted AI algorithmic invasions of digital privacy in the effort to combat illegal porn. Companies like Apple are betting that AR glasses (and then AR contact lenses) will over the next few years become mass adopted, and eventually worn almost ubiquitously by anyone who today owns a smartphone. Just as Facebook’s Oculus Quest 2 virtual reality headset makes money from harvesting reams of data about consumers’ behavior in VR, so the potential data harvesting of AR glasses worn all day by billions is unimaginable. Combine this with an accepted use of algorithms to analyze that data for potential illegal behavior, and the distinction between the mental and the digital begins to break down. Your glasses will record details and patterns of whatever you are looking at while you wear them, including whichever objects you particularly focus your gaze on. They may be fitted with biometric sensors, or at least combine with biometric sensors in your other wearables. Suppose all this data is used to show that you were focusing your gaze rather too long on a young woman in the metro, while your pupils were dilating and your heart rate was increasing. Could this one day be used as evidence in court that you were undressing her in your mind? Perhaps the glasses have a zoom feature, and you used that to ‘enable’ your depraved act?
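
To make the scenario concrete, here is a deliberately crude, entirely hypothetical Python sketch of the sort of inference an AR platform could automate once it holds gaze and biometric data. Every field, threshold, and criterion below is made up and corresponds to no real product; the point is simply how little code such ‘mental policing’ would take once the raw data exists and scanning it is considered acceptable.

```python
# Hypothetical sketch: flagging "suspicious attention" from gaze + biometric data.
# All fields and thresholds are invented for illustration; no real AR product or
# legal standard is being described here.
from dataclasses import dataclass

@dataclass
class GazeEvent:
    target_id: str           # what the wearer was looking at
    dwell_seconds: float     # how long the gaze stayed on it
    pupil_dilation: float    # relative change from the wearer's baseline
    heart_rate_delta: float  # beats/minute above resting rate

def flag_suspicious_attention(event: GazeEvent) -> bool:
    """Flag 'prolonged arousal-correlated attention' (a made-up criterion)."""
    return (
        event.dwell_seconds > 8.0
        and event.pupil_dilation > 0.15
        and event.heart_rate_delta > 10.0
    )

# Example: a wearer looks at a stranger for ten seconds while their pupils dilate
# and their heart rate rises -- the system would log a flag for later review.
print(flag_suspicious_attention(
    GazeEvent(target_id="person_042", dwell_seconds=10.2,
              pupil_dilation=0.2, heart_rate_delta=14.0)))
```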

Of course my examples may be a little crude or even far-fetched, but they hopefully give some hint of how data harvesting, the blurring of the digital with the private and the mental, badly thought-out ‘deepfake porn’ style laws, and an acceptance of the right of government or big tech to intrusively ‘police’ the individual’s digital realm may all be leading us into a dystopian sex tech future.