Tuesday, January 31, 2023

Twitter’s attempt to monetize porn has reportedly been halted due to child safety warnings – londonbusinessblog.com

Shreya Christina – londonbusinessblog.com

Despite serving as the online water cooler for journalists, politicians and VCs, Twitter isn’t the most profitable social network around. Amid internal shocks and increasing pressure from investors to make more money, Twitter reportedly considered monetizing adult content.

According to a report from The Verge, Twitter was preparing to compete with OnlyFans by allowing adult creators to sell subscriptions on the social media platform. That idea may sound strange at first, but it’s actually not that outlandish – some adult creators already rely on Twitter to promote their OnlyFans accounts, since Twitter is one of the few major platforms where posting porn does not violate the guidelines.

But Twitter has apparently put this project on hold after a “red team” of 84 employees, assembled to test the product for security flaws, discovered that Twitter could not effectively detect child sexual abuse material (CSAM) and non-consensual nudity at scale. Twitter also had no tools to verify that creators and consumers of adult content were over the age of 18. According to the report, Twitter’s health team had been warning senior leadership about the platform’s CSAM problem since February 2021.

To detect such content, Twitter uses PhotoDNA, a Microsoft-developed hash database that allows platforms to quickly identify and remove known CSAM. But if a piece of CSAM isn’t already part of that database, newer or digitally altered images can escape detection.
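PhotoDNA itself is proprietary, but the general "known-hash list" approach it belongs to can be sketched in a few lines. The code below is an illustrative sketch only, not PhotoDNA: all names (`known_hashes`, `is_known_image`) are hypothetical, and it uses a plain cryptographic hash, which, unlike PhotoDNA's perceptual hash, breaks on even a one-byte alteration – the very limitation the report describes.

```python
# Illustrative sketch of hash-database content matching.
# NOT PhotoDNA: a real system uses a perceptual hash robust to
# resizing and re-encoding; SHA-256 is used here only to show the idea.
import hashlib

# A platform keeps a set of hashes of known flagged images,
# supplied by clearinghouses rather than computed locally.
known_hashes = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the known-hash database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(is_known_image(b"example-known-image-bytes"))   # exact copy matches
print(is_known_image(b"example-known-image-bytes!"))  # one byte changed: no match
```

Even with a perceptual hash that tolerates edits, the scheme can only match images already in the database, which is why newer material escapes detection until it is reported and hashed.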

“You see people saying, ‘Twitter is doing poorly,'” said Matthew Green, an associate professor at the Johns Hopkins Information Security Institute. “And then it turns out that Twitter uses the same PhotoDNA scanning technology as almost everyone else.”

Twitter’s annual revenue — about $5 billion in 2021 — is small compared to a company like Google, which made $257 billion in revenue last year. Google has the financial resources to develop more advanced technology to identify CSAM, but these machine learning-driven mechanisms are not foolproof. Meta also uses Google’s Content Safety API to detect CSAM.

“This new kind of experimental technology is not the industry standard,” explains Green.

In a recent case, a father noticed that his toddler’s genitals were swollen and painful, so he contacted his son’s doctor. Prior to a telemedicine appointment, the father sent pictures of his son’s infection to the doctor. Google’s content moderation systems marked these medical images as CSAM, barring the father from all of his Google accounts. Police were alerted and began investigating the father, but ironically they were unable to contact him as his Google Fi phone number was disconnected.

“These tools are powerful because they can find new things, but they’re also error-prone,” Green told londonbusinessblog.com. “Machine learning doesn’t know the difference between sending something to your doctor and actually sexually abusing children.”

While this type of technology is being used to protect children from exploitation, critics fear that the cost of this protection – mass surveillance and scanning of personal data – is too high. Apple planned to roll out its own CSAM detection technology called NeuralHash last year, but the product was scrapped after security experts and privacy advocates pointed out that the technology could be easily abused by government agencies.

“Systems like this could report vulnerable minorities, including LGBT parents in locations where police and community members are not friendly to them,” wrote Joe Mullin, a policy analyst for the Electronic Frontier Foundation, in a blog post. “Google’s system could falsely report parents to authorities in autocratic countries, or locations with corrupt police, where falsely accused parents cannot be guaranteed due process.”

This does not mean that social platforms cannot do more to protect children from exploitation. Until February, Twitter had no way for users to flag content containing CSAM, meaning some of the website’s most harmful content could remain online long after users reported it. Last year, two people sued Twitter for allegedly profiting from videos of them recorded when they were teenage victims of sex trafficking; the case has been referred to the US Ninth Circuit Court of Appeals. The plaintiffs claimed that Twitter did not remove the videos when informed about them. The videos garnered more than 167,000 views.

Twitter faces a tricky problem: the platform is large enough that it’s nearly impossible to detect all CSAM, but it isn’t making enough money to invest in more robust protections. According to The Verge’s report, Elon Musk’s potential takeover of Twitter is also affecting the priorities of the company’s health and safety teams. Last week, Twitter reportedly reorganized its health team to focus instead on identifying spam accounts – Musk has fervently claimed that Twitter is lying about the prevalence of bots on the platform, citing this as his reason for wanting to back out of the $44 billion deal.

“Everything Twitter does that is good or bad is now weighed in light of, ‘How does this affect the trial [with Musk]?’” said Green. “Billions of dollars could be at stake.”

Twitter did not respond to londonbusinessblog.com’s request for comment.
