
Tech

We used AI to write articles about CNET writing with AI


Technology news site CNET has been discovered using artificial intelligence (AI) to write articles about personal finance without any prior announcement or explanation. The articles, which number 73, cover topics such as “What is Zelle and how does it work?” and each carries a small disclaimer at the bottom: “This article was created using automation technology and has been carefully edited and fact-checked by an editor on our editorial team.” The bylines on these AI-generated articles read “CNET Money Staff.”

The use of AI to write these articles was first revealed by a Twitter user, and further investigation showed that the articles had been produced with AI since November 2022. The extent and form of the AI CNET currently uses is not known, as the company did not respond to questions about its use of artificial intelligence.

The use of AI in journalism raises questions about the transparency and ethics of the practice, as well as its potential impact on the veracity and accuracy of news. It also raises concerns about the implications for SEO and Google Search rankings. CNET’s lack of response regarding its use of AI in writing articles has heightened these concerns and sparked a broader discussion about the future of journalism and AI’s role in it.

Note: This entire article was written by ChatGPT and reviewed by a human editor. (In fact, we had to rewrite the prompt several times to get it to stop making factual errors. Also, CNET did not respond to a human journalist’s request for comment.)


Tech

The latest TikTok trend is… flashing your boobs


BuzzFeed News was able to find over 20 Foopah challenge videos within an hour of being on the platform, several of which were then surfaced on the For You page as a result. (BuzzFeed News will not link to or embed any videos besides Andrews’s, as we cannot guarantee that all users participating in the challenge are of legal age.) Even today, upon opening the app, BuzzFeed News encountered Foopah challenge videos among the first five videos shown.

It’s viral gold, combining sexiness and the thrill of putting one over on a giant tech platform with an easily replicable concept. Andrews learned of the challenge when a TikTok user told her it existed. She soon produced a few more videos, driving traffic to her OnlyFans. “I’ve been getting more traffic in the last couple of days just from posting these new TikToks versus normal trends,” she said.

TikTok moderates content by first running videos through an automated system that uses computer vision to check whether they contain anything that violates its guidelines, which “do not allow nudity, pornography, or sexually explicit content on our platform.” Anything deemed suspicious is then screened by a human moderator, but moderators are expected to review a thousand videos in one shift, which means they can’t examine any one video in detail.

Plus, Andrews said, there’s no way to be sure the people in the videos are actually flashing. “Prove it,” she said. Some participants in the Foopah trend clearly use an elbow or thumb instead of a breast or nipple peeking around the door. (Andrews, for her part, does not use the trick. “Yeah, it’s real,” she said, when asked if her videos show her flashing her breasts.)


“This is another example where the content moderation system is pitted against a younger fanbase of entrepreneurs,” said Liam McLoughlin, a lecturer at the University of Liverpool who studies content moderation. “These moderators are often given seconds to decide if content is against the rules, and from the Foopah examples I’ve seen, it took me minutes to spot it. So even if content is flagged by a filter, human moderators may not be able to keep up with it.”

The spread of the Foopah challenge shows the power of TikTok’s For You page and the algorithms behind it. “A video that TikTok doesn’t punish can really go somewhere,” said Carolina Are, an innovation fellow who studies the intersection between online abuse and censorship at Northumbria University in the UK. (She has herself been a victim of heavy-handed content moderation on TikTok.)

TikTok has blocked access to a number of hashtags used to post videos, but content using one hashtag, #foopahh_, has been viewed more than 7 million times overall, including 2 million views in the past week. Two-thirds of users who use the hashtag are between the ages of 18 and 24, according to TikTok’s own data.

About half of the more than 20 videos initially found by BuzzFeed News were deleted within 48 hours, with many of the accounts behind them terminated. But more videos have appeared to take their place. A TikTok spokesperson told BuzzFeed News, “Nudity and sexually explicit content is not allowed on TikTok. We take appropriate action against any such content, including blocking offending hashtags and removing videos. We continue to invest extensively in our trust and safety processes.”

Are researches how social media platforms take a harsh approach to women’s bodies and how content moderation guidelines are often weaponized by those who dislike or seek to control women. “One of the reasons this happened, and one of the reasons this weird format became a trend, is because the moderation of bodies on social media is notoriously puritanical,” she said.


This is something Andrews, who has had many of her previous TikTok accounts banned, agrees with. “You get banned without explanation,” she said. “There’s no rhyme or reason. It’s stupid.”

In addition to his concerns about the spread of explicit content to people who may not choose to consume it, McLoughlin worries about the long-term repercussions of this trend. “Other creators who don’t break the rules may find themselves subject to harsher regulations that target them directly,” he said. “I can certainly imagine those who talk about breastfeeding being targeted, for example.”

It’s something sex workers on TikTok are worried about. Steve Oshiri, a Canadian adult content creator, tweeted that the Foopah challenge “looks bad for us” and would have a negative impact on adult creators’ ability to post safe-for-work content on TikTok in the future. “In the next couple of weeks, I expect to see a lot of accounts banned or guidelines updated,” Oshiri added.

Others worried about the potential legal ramifications for creators exposing themselves to minors on the app, given TikTok’s relatively young user base.

Are, who said her attitude is “I want boobs everywhere,” believes the controversy surrounding the challenge is more evidence of the double standards applied to women on social media. “Because when we talk about bodies, especially women’s bodies,” she said, “everyone’s kind of like, ‘Oh, well, bodies are harmful. Isn’t anyone thinking about the kids?’”


Tech

Why did Andrew Tate choose to live in Romania?


“Andrew is a controversial public figure,” she continued. In an apparent effort to show that Tate is prone to exaggeration, she added, “He said he had laser vision, that he didn’t sleep, and that tigers didn’t attack him because they had a pact.”

Petrescu also said that in one of Tate’s online videos, he said that rape in Romania was “a very serious crime” and that he “wanted to live in a country with strict rules against mistreatment of women”. (BuzzFeed News was unable to locate the video, and Petrescu did not provide a link when asked.)

But the reality on the ground in Romania is much different. In addition to the material poverty and lack of education that many women experience, factors such as the country’s geographic location (close to the Middle East and Western Europe), along with its membership in the European Union, facilitate travel and enable international and local trafficking.

The US State Department has been critical of Romania’s response to human trafficking in its annual reports on the issue. The most recent, published in July 2022, noted the authorities’ ongoing efforts to improve the quality of their work but concluded that “Romania remains a major source country for sex trafficking and labor trafficking victims in Europe.” According to the report, more than a third of identified sex trafficking victims are children. The report also said Romanian officials were being investigated on suspicion of involvement in trafficking.


This is made worse by the fact that investigative institutions are under-resourced and overwhelmed. A prosecutor from the Directorate for Investigation of Organized Crime and Terrorism, the same department investigating Tate, said one of the obstacles prosecutors faced in combating the growing phenomenon of human trafficking was the lack of specialized judicial police, similar to senior investigators, in the Romanian system.

“Human trafficking and modern slavery have become a national sport in Romania,” said the prosecutor, who spoke with BuzzFeed News on condition of anonymity because he has not been cleared of media interactions by his superiors. “We have prosecutors but not enough police officers assigned to our offices to handle cases.”

Meanwhile, the gangs, which often consist of dozens of members performing specific functions, make millions of dollars annually and have access to resources such as international safe houses, firearms, fraudulent travel documents, and expensive lawyers.

Aeolian Lorenz, a Romanian member of parliament who recently introduced a law aimed at strengthening the resources available to authorities against human traffickers, told BuzzFeed News that Tate’s case is not isolated.

“Tate saw a weak country and many vulnerable women and chose to come here,” Lorenz said, adding that he believes “misogyny plays a role” in Romania’s social and institutional attitude toward the trafficking of women. “Romania has not invested enough in protecting the victims,” he added.


The arrests of the Tates and their alleged associates were the exception rather than the rule, according to local experts BuzzFeed News spoke to, and international politics may have been at play.

“The US embassy is very strong in Romania,” said Alexandru Gekou, a Bucharest criminal attorney, commenting on the fact that Tate’s investigation was sparked by a tip-off sent to the embassy in Bucharest regarding a young American woman who alleged she was being held against her will and assaulted by Tate.

“Maybe Tate thought he would be insulated from legal consequences” in Romania, the attorney said, adding that he believed the fact that an American woman had come forward with allegations against Tate had embarrassed local authorities and prompted them to act faster.

“If she had been Romanian, I think it would have moved slower,” he added.


Tech

The Supreme Court questions the Section 230 legal shield


Internet giants such as Google, Facebook, YouTube, and Twitter owe much of their success to the legal shield that was erected by Congress in 1996.

Known as Section 230, it has been called the foundation on which Big Tech was built. Although it attracted little attention at the time, it is now seen as one of the pillars of the massively open, global internet we know today.


While newspapers and television stations can be held liable for any false and harmful content they publish or broadcast, online platforms are treated differently under Section 230.

Congress passed a special free speech rule to protect the new world of online communication. It says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Law professor and author Jeff Kosseff called Section 230 “the twenty-six words that created the internet,” because it allowed websites to develop freely as platforms for other people’s words, images, and videos.

The matter has gone unchallenged in the Supreme Court – until now.

This week, judges will hear two cases that may finally breach that legal shield and dramatically change the rules of the online game.


They are expected to hear a third case later this year related to Internet companies’ First Amendment rights amid government efforts to regulate them.

The case scheduled to be heard on Tuesday began with a lawsuit filed by a California family against Google and YouTube for allegedly aiding and abetting an act of international terrorism. Their daughter, Nohemi Gonzalez, was killed in Paris in November 2015 when Islamic State terrorists opened fire on a restaurant where the 23-year-old student was dining with two of her friends. It was part of an ISIS attack across the city that killed 129 people.

Their lawsuit alleged that Google, which owns YouTube, “knowingly allowed ISIS to publish hundreds of extremist videos inciting violence and recruiting potential supporters to join ISIS forces.” Furthermore, they claimed that YouTube affirmatively “recommended ISIS videos to users.”

At issue on Tuesday is only the second claim. Can YouTube be sued over the algorithms it created to steer users toward similar content, in this case allegedly steering potential terrorists toward other ISIS videos? Or does Section 230 shield it from such allegations?

More than forty technology companies, internet scholars, and free speech advocates filed amicus curiae briefs arguing that internet companies should not be held liable for using computer programs that direct users to content they may find interesting.


“It’s the recommendation algorithms that make it possible to find the needles in humanity’s largest haystack,” said Washington attorney Lisa S. Blatt, who represents Google and YouTube. She warned that opening the door to lawsuits over algorithms “threatens to upend the modern internet.”

A federal judge dismissed the family’s lawsuit based on Section 230, and the divided Ninth Circuit Court of Appeals affirmed that decision in 2021.

Until now, the Supreme Court had refused to hear appeals regarding the law. However, Justice Clarence Thomas has on several occasions called for paring back “the sweeping immunity courts have read into Section 230,” particularly in cases where websites knowingly spread dangerous falsehoods or criminal schemes.

Some prominent liberals, including Ninth Circuit judges Marsha Berzon and Ronald Gould, have also called for narrowing the scope of Section 230.

They are joined by advocates — liberal and conservative — who portray the Internet as a hotbed of misinformation and hate speech, home to stalkers and scammers and a contributor to teen suicides and mass shootings. Critics also say that social media companies are getting rich and keeping viewers online by amplifying the most extreme claims and the most angry voices.


Google and other tech companies were caught off guard in October when the Supreme Court voted for the first time to hear a direct challenge to Section 230 and determine whether websites like YouTube could be sued for their use of algorithms and targeted recommendations.

Their alarm grew in December when the Biden administration sided with the plaintiffs in Gonzalez v. Google and said YouTube could be sued over algorithms that “recommend” more videos to viewers.

The DOJ attorneys said the Ninth Circuit erred in dismissing the suit and called for a narrower understanding of Section 230. They agreed that websites are shielded from liability for displaying content provided by others, including ISIS videos, but said they are not protected for their “conduct” in recommending more videos to watch.

And they wrote in their brief that when YouTube presents a user with a video they did not ask to view, it implicitly tells the user that they will be interested in that content based on the video and their account information and characteristics.

Several experts in Internet law said they were baffled by the Supreme Court’s decision to hear the case and disturbed by what it might mean.


“The internet needs curation. We need to be able to find what we’re looking for,” said Eric Goldman, a professor of law at Santa Clara University. If websites can’t sort content using algorithms, he said, the internet won’t function efficiently.

Blatt, Google’s attorney, said YouTube does not “recommend” videos in the sense of endorsing them, any more than Google Search endorses its search results. Rather, YouTube displays videos that may be relevant to users.

On Wednesday, the court will hear a related case, but one focused solely on whether Facebook, Google and Twitter can be sued for aiding international terrorists.

Congress in 2016 expanded the Anti-Terrorism Act to allow lawsuits to be brought by victims or survivors against anyone who “knowingly provided substantial assistance” to someone who committed an act of international terrorism.

The American family of a Jordanian national who was killed in an ISIS attack on the Reina nightclub in Istanbul in 2017 has sued Facebook, Twitter and YouTube, accusing them of aiding and abetting the killing. They said ISIS maintained public accounts on all three social media platforms and used them to recruit members.


The Ninth Circuit allowed that claim to move forward, but the Justice Department and the social media companies said that was a mistake. They said the suit should be dismissed because the plaintiffs could not show that the online platforms provided “substantial assistance” to the terrorist who carried out the mass shooting.

It’s not entirely clear why the court agreed to hear the second case, Twitter v. Taamneh, but the justices may have decided the two cases presented two questions: Can a social media site be sued for aiding terrorists? And if so, can it be held liable for steering viewers to ISIS videos?

It’s unclear whether the justices will split along the usual ideological lines in the debate over Section 230, which has critics and defenders among both liberals and conservatives.

The biggest question still pending before the court may be: Can states regulate the internet and penalize social media companies for what they post or remove from their sites?

That clash began with a sharp partisan tone. Republican leaders in Texas and Florida adopted laws two years ago that allow fines and damages suits against Facebook, Twitter and other large social media sites if they “censor” or discriminate against conservatives. In signing the measure, Florida Governor Ron DeSantis said the law was intended to “protect against Silicon Valley elites”.

Before the laws became effective, they were challenged on free speech grounds and suspended based on the First Amendment, not Section 230.


The justices will almost certainly agree to review one or both of the laws because appeals court judges, both appointed by President Trump, have been divided on a key constitutional issue.

Judge Kevin Newsom of the Eleventh Circuit in Atlanta blocked most of the Florida law from taking effect. He said the First Amendment “constrains governmental actors and protects private actors.” Social media sites are private companies, and, with minor exceptions, the government simply cannot tell a private person or entity what to say or how to say it.

Shortly thereafter, Judge Andrew Oldham of the Fifth Circuit in New Orleans upheld the Texas law on the grounds that the state sought to protect Texans’ free speech rights. Oldham, a former counsel to Texas Governor Greg Abbott and law clerk to Justice Samuel Alito Jr., said it was “a rather odd inversion of the First Amendment” to say that social media platforms have a right to silence speech. “We reject the idea that corporations have a freewheeling First Amendment right to censor what people say,” he wrote.

Last month, the Supreme Court asked the Justice Department to weigh in on the issue, which will likely delay the cases until the fall.

If, as expected, the Justice Department presents its view of the issue by June, the justices will likely schedule one or both cases for argument in the fall.
