Tech

How to hold Big Tech responsible for content

A Supreme Court case selected for oral arguments on February 21st could change the web as we know it.

The case was brought by the family of a woman who was killed in an Islamic State terrorist attack in Paris in 2015. The plaintiffs claimed that YouTube – owned by Google – knowingly allowed hundreds of extremist videos to be published, and that YouTube recommended ISIS videos to users. Google argued that it was exempt in this case under Section 230 — the powerful 1996 legislation that shields web and social media companies from legal liability for content posted by users.

Google’s position has been upheld by a federal district court and the Ninth U.S. Circuit Court of Appeals. The Supreme Court’s decision to hear the case signals the justices’ interest in shaping the landmark law, which remains vital legislation protecting small and medium-sized businesses without deep pockets or armies of lawyers to fend off countless lawsuits. It gives companies ample latitude to moderate content at their discretion without liability, and most importantly, it enables startups to challenge established companies in the free market.


Section 230 has drawn fire from both sides of the aisle. President Biden reiterated his call to reform the law earlier this year. Democratic politicians, including Biden, generally want to amend or repeal Section 230 to force social media companies into more moderation. Republican politicians, including former President Trump and Sen. Mitch McConnell, have called for its repeal to force social media companies to do less moderation. The Supreme Court is also hearing cases challenging laws in Texas and Florida that limit platforms’ ability to remove content or prevent them from banning politicians.

When Section 230 was enacted, the web was an entirely different place. Social media was in its infancy. Platforms had not yet begun massively spying on, tracking, targeting, and manipulating the online activity of their users. Today that business model is the golden goose for the mainstream social media giants. Here’s the catch: behemoths including Facebook, Instagram, Twitter, TikTok, and YouTube have abused their Section 230 privileges. They hide behind the legislation’s liability shield while targeting their users with content they didn’t request or search for.

Instead of getting rid of Section 230, we should reform it to allow free speech and support modestly funded startups while holding all companies accountable. The liability shield should protect content that a web company plays no role in promoting or amplifying, as well as moderation decisions that are specifically in line with the company’s terms of service.

But liability protections should be removed in four cases: content that the company’s algorithms cause to “trend” in front of users who otherwise wouldn’t have seen it; content promoted via the site’s paid ad-targeting system; removed content that does not violate any of the site’s posting rules – for example, rules prohibiting targeted harassment, bullying, incitement to violence, spam, or doxxing – that were in effect on the day it was posted; and content that has been recommended or inserted into a user’s feed, algorithmically or manually by the site, and to which the user has not explicitly subscribed.

Sites can then choose: do they want to engage in the targeting and newsfeed manipulation of their users and thus be held responsible? Or do they simply want to provide a platform where users follow content from the friends, groups, and influencers they choose to connect with and watch? Algorithmic recommendations should become more transparent in this scenario. Sites will have to clearly identify what content has been boosted via their algorithms and obtain explicit permission from users to serve that content to them, giving users more control and transparency.


In addition, in line with Florida’s justification for its law that could reach the Supreme Court, Section 230 should be amended to require sites to “be transparent about content moderation practices and give users proper notice of changes to those policies.” Freedom of expression must be protected from the politically motivated whims of a site’s management team or staff.

It is also important to specify what amplified content companies will not be liable for. For example, what happens if a social media company recommends a post about big waves, and a kid sees the post, goes out surfing, and drowns? Can his family sue the social network? The solution here is to make clear in updated Section 230 legislation that companies are liable only for certain types of content they promote, such as defamation and incitement to violence, not for any content that happens to precede a shocking outcome.

Any broader changes to Section 230 would result in a complete loss of user privacy online. If web companies were responsible for any and all content on their platforms, they would have to scrutinize everything users post – Big Brother on steroids. Startups would struggle to afford the oversight or the legal fees.

If Section 230 were repealed, web companies seeking to avoid liability would either preemptively censor any controversial content or take a hands-off approach and avoid moderation altogether. The first would be an Orwellian nightmare devoid of free speech, while the second would mean a cesspool of objectionable content. This is a lose-lose scenario.

The Supreme Court should uphold Section 230 to continue protecting freedom of expression and encouraging competition. The task of making nuanced reforms should fall to Congress: hold companies accountable for clearly defined content that they actively participate in targeting, promoting, or censoring, while setting rules that protect user privacy and prevent frivolous lawsuits. This compromise is the best way forward.


Mark Weinstein is the founder of the social network MeWe and the author of a book on social media, mental health, privacy, civil discourse, and democracy.


Elon Musk drops into a Twitter Space about the suspended reporters


Hours after Twitter permanently suspended more than half a dozen journalists from outlets including CNN, The New York Times, and The Washington Post following their reporting on an Elon Musk-related account, Musk himself appeared briefly in a Twitter Space hosted by BuzzFeed News technology reporter Katie Notopoulos on Thursday night.

“Everyone is going to be treated the same,” Musk, the Twitter chief executive, said in defense of the decision to suspend the reporters. “You’re not special because you’re a journalist.”

Shortly after journalists attempted to question him about the suspensions, he left the Space.

Earlier in the evening, Musk falsely accused the journalists of publishing his real-time location, which he referred to as “basically assassination coordinates.” He said doing so was a “direct violation of Twitter’s terms of service.”


The journalists were covering the story of Twitter’s ban of @ElonJet, an account that tweeted the whereabouts of Musk’s private jet using publicly available data, when they suddenly discovered that their own Twitter accounts had been suspended. Twitter on Thursday also blocked the personal account of Jack Sweeney, the Florida college student who runs @ElonJet, as well as the official account of Mastodon, a Twitter competitor that had posted about @ElonJet’s presence on its own platform.

Musk appeared in the Space, titled “#saveryanmac #macpack” (after former BuzzFeed News and current New York Times reporter Ryan Mac, one of the suspended journalists), more than two hours after it began.




Twitter Space goes viral after Elon Musk is questioned by reporters


The billionaire has so far struggled to reconcile his free-speech evangelism with his decisions to quash criticism of him on the platform.

When announcing his interest in buying the company earlier this year, Musk tweeted, “I hope even my worst critics stay on Twitter, because that’s what free speech is all about.” But his haphazard decisions since becoming CEO have scared off much-needed advertisers and alienated many users from the platform.

Recently, Musk falsely accused the @ElonJet account — which was banned Wednesday — of doxxing him by publishing his real-time location. That evening, he said, a car carrying his child had been followed by a “crazy stalker” who thought he was trailing Musk. (The Los Angeles Police Department said in a statement that no police report had been filed on the alleged incident.)

“Any account that posts real-time location information about anyone will be suspended, as it is a physical safety violation. This includes posting links to sites containing real-time location information,” Musk wrote on Twitter.


Then on Thursday, he made the same false accusation against the journalists who covered the ElonJet ban, and suspended them. Reporters who tweeted the LAPD statement were also suspended.




Illustrators are outraged over a children’s book created with artificial intelligence


Ammaar Reshi, 28, has been fascinated by technology since he was a child. “I was always curious, and my dad let me play with his computer when I was 5,” he said. He grew up in Pakistan before his family moved to the U.K., where Reshi studied computer science in London. A job at Palantir Technologies took Reshi to Palo Alto, California, and since 2020 he has worked for the financial technology company Brex, where he is now Director of Design.

When a bunch of generative AI tools started hitting the market over the past few months, Reshi started messing around with them. Earlier this month, he had the idea to write a book for his best friend’s child, who was born this year, using artificial intelligence. “I said I’d take the weekend to try and get this out there,” he recalls.

First, Reshi used ChatGPT to come up with a story about Alice, a young girl who wants to learn about the world of technology, and Sparkle, a cute robot who helps her. “It gave me a basis for a story,” Reshi said. “It was fine. It had its issues, of course.” Then he started tweaking it.

He asked ChatGPT to make Alice more curious and Sparkle more self-aware. Reshi then used the AI app Midjourney to create the images he wanted. “I just started putting in prompts like ‘little girl’ and some descriptors: ‘blue eyes,’ ‘simple dress,’ ‘excited,’ ‘curious.’ And that yielded some results,” he said. “Now, let me tell you, some of those results were absolutely horrifying. It would have been a horror book if I had put those early illustrations in it.”


He spent hours adjusting the prompts he gave Midjourney, estimating that he rejected “hundreds” of illustrations to arrive at the 13 that fill the 14-page book. “I almost gave up because I was like, I don’t know if this is possible, but then it paid off in the end,” he said.
