Monday, September 26, 2022

Twitter’s attempt to monetize porn has reportedly been halted due to child safety warnings – londonbusinessblog.com

Shreya Christina – londonbusinessblog.com
Shreya has been with londonbusinessblog.com for 3 years, writing copy for client websites, blog posts, EDMs and other mediums to engage readers and encourage action. By collaborating with clients, our SEO manager and the wider londonbusinessblog.com team, Shreya seeks to understand an audience before creating memorable, persuasive copy.

Despite serving as the online water cooler for journalists, politicians and VCs, Twitter isn’t the most profitable social network around. Amid internal shake-ups and increasing pressure from investors to make more money, Twitter reportedly considered monetizing adult content.

According to a report from The Verge, Twitter was about to become a competitor to OnlyFans by allowing adult creators to sell subscriptions on the social media platform. That idea may sound strange at first, but it’s actually not that outlandish: some adult creators already rely on Twitter as a means of promoting their OnlyFans accounts, since Twitter is one of the few major platforms where posting porn does not violate the guidelines.

But Twitter has apparently put this project on hold after a “red team” of 84 employees, assembled to test the product for security flaws, discovered that Twitter could not reliably detect child sexual abuse material (CSAM) or widespread non-consensual nudity. Twitter also had no tools to verify that creators and consumers of adult content were over the age of 18. According to the report, Twitter’s health team had been warning senior leadership about the platform’s CSAM problem since February 2021.

To detect such content, Twitter uses PhotoDNA, a Microsoft-developed hash-matching database that allows platforms to quickly identify and remove known CSAM. But the system only catches images already in that database; newer or digitally altered images can escape detection.
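PhotoDNA’s actual algorithm is a proprietary “robust” perceptual hash designed to survive resizing and minor edits, and its hash list is not public. As a rough illustration of the database-matching approach it relies on, the sketch below uses an ordinary cryptographic hash as a stand-in, which makes the limitation described above even starker: any content absent from the database, or altered by even a single byte, is missed entirely.

```python
import hashlib

# Hypothetical database of hashes of known prohibited images.
# PhotoDNA uses a proprietary perceptual hash that tolerates small
# alterations; SHA-256 here is only an illustrative stand-in.
known_hashes = {
    hashlib.sha256(b"known-image-bytes").hexdigest(),
}

def is_known(image_bytes: bytes) -> bool:
    """Return True only if this exact image's hash is already in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(is_known(b"known-image-bytes"))   # exact copy of a known image: detected
print(is_known(b"known-image-bytes!"))  # altered by one byte: missed
```

The gap between this toy exact-match lookup and a perceptual hash is precisely where detection systems invest their engineering effort, and, as the article notes, even the more advanced machine-learning approaches remain error-prone.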

“You see people saying, ‘Twitter is doing poorly,'” said Matthew Green, an associate professor at the Johns Hopkins Information Security Institute. “And then it turns out that Twitter uses the same PhotoDNA scanning technology as almost everyone else.”

Twitter’s annual revenue — about $5 billion in 2021 — is small compared to a company like Google, which made $257 billion in revenue last year. Google has the financial resources to develop more advanced technology to identify CSAM, but these machine learning-driven mechanisms are not foolproof. Meta also uses Google’s Content Safety API to detect CSAM.

“This new kind of experimental technology is not the industry standard,” explains Green.

In a recent case, a father noticed that his toddler’s genitals were swollen and painful, so he contacted his son’s doctor. Prior to a telemedicine appointment, the father sent pictures of his son’s infection to the doctor. Google’s content moderation systems marked these medical images as CSAM, barring the father from all of his Google accounts. Police were alerted and began investigating the father, but ironically they were unable to contact him as his Google Fi phone number was disconnected.

“These tools are powerful because they can find new things, but they’re also error-prone,” Green told londonbusinessblog.com. “Machine learning doesn’t know the difference between sending something to your doctor and actually sexually abusing children.”

While this type of technology is being used to protect children from exploitation, critics fear that the cost of this protection – mass surveillance and scanning of personal data – is too high. Apple planned to roll out its own CSAM detection technology called NeuralHash last year, but the product was scrapped after security experts and privacy advocates pointed out that the technology could be easily abused by government agencies.

“Systems like this could report on vulnerable minorities, including LGBT parents in locations where police and community members are not friendly to them,” wrote Joe Mullin, a policy analyst for the Electronic Frontier Foundation, in a blog post. “Google’s system could falsely report parents to authorities in autocratic countries, or locations with corrupt police, where falsely accused parents cannot be guaranteed due process.”

This does not mean that social platforms cannot do more to protect children from exploitation. Until February, Twitter had no way for users to report content containing CSAM, meaning some of the website’s most harmful content could remain online long after users flagged it. Last year, two people sued Twitter for allegedly profiting from videos recorded of them as teenage sex trafficking victims; the case is now before the US Ninth Circuit Court of Appeals. The plaintiffs claimed that Twitter did not remove the videos when informed about them. The videos garnered more than 167,000 views.

Twitter faces a tricky problem: the platform is large enough that it’s nearly impossible to detect all CSAM, but it’s not making enough money to invest in more robust protections. According to The Verge’s report, Elon Musk’s potential takeover of Twitter is also affecting the priorities of health and safety teams at the company. Last week, Twitter is said to have reorganized its health team to focus instead on identifying spam accounts — Musk has fervently claimed that Twitter is lying about the prevalence of bots on the platform, citing this as his reason for wanting to end the $44 billion deal.

“Everything Twitter does that is good or bad is now weighed in light of, ‘How does this affect the trial [with Musk]?’” said Green. “Billions of dollars could be at stake.”

Twitter did not respond to londonbusinessblog.com’s request for comment.
