Hiltzik: Chatbot content flooding publishers

After 17 years as publisher of science fiction and fantasy short stories at Clarkesworld, the electronic and print magazine he founded in 2006, Neil Clarke has become adept at weeding out the occasional plagiarized submission from the hundreds he receives each month.

The problem worsened during the pandemic lockdowns, when workers confined at home were looking for ways to boost their income. But it exploded in December, after San Francisco-based OpenAI publicly released ChatGPT, a particularly sophisticated machine for generating prose.

Of the 1,200 submissions he received this month, Clarke believes, 500 were machine-generated. He has stopped accepting submissions until at least the end of the month, but he is confident that if he hadn't, the synthetic pile would have reached parity with legitimate submissions by then, "and more than likely passed" them, he says.

This whole madness has taken over the world, and everyone is in a frenzy.

– H. Holden Thorp, editor-in-chief, Science


"We saw no reason to think it was slowing down," says Clarke, 56. As he did with plagiarists, he has rejected the machine-generated content and permanently banned those who submitted it.

A veteran of the software industry, Clarke has developed a general set of telltale signs that enable him to recognize machine-written prose almost automatically. He won't share them, fearing that simple tweaks would allow users to circumvent his screening. "They would still be bad stories and they would still be rejected, but it would make the job ten times harder for us."

Regulators have barely begun to consider the legal implications of software that can scour the web for raw material, including the potential for widespread copyright infringement and for forgery in prose and art.

That air of authenticity "will make generative AI attractive for malicious uses where the truth is less important than the message it conveys, such as disinformation and online harassment campaigns," Alex Engler of the Brookings Institution has written.

The onslaught of machine-written material has become Topic A in the publishing industry because ChatGPT and similar programs, known as chatbots, have demonstrated an astonishing ability to produce prose that mimics human writing.

"Everyone is talking about this, and a lot of people are focused on what it will mean for scientific publishing," says H. Holden Thorp, editor-in-chief of the Science family of journals. "This whole madness has taken over the world, and everyone is in a frenzy."

Science has imposed the strictest policy on content generated by AI tools such as chatbots among technical publishers: It's forbidden, period.


The ban covers "text generated from artificial intelligence, machine learning, or similar algorithmic tools" as well as "figures, images, or graphics." AI software cannot be listed as an author. Science warns that it will treat violations of the policy as "scientific misconduct."

Thorp explained further in an editorial that using programs such as ChatGPT would violate Science's basic principle that authors must certify that their submitted work is "original." That is "enough to signal that text written by ChatGPT is not acceptable," he wrote: "It is, after all, plagiarized from ChatGPT."

That goes well beyond the rules set by Nature, which ranks with Science at the top of the most prestigious scientific journals. Nature's journals specify that language-generating programs cannot be credited as authors on a paper, as has happened with some published papers: "Any attribution of authorship carries with it accountability for the work," Nature explains, "and AI tools cannot take such responsibility."

But Nature allows researchers to use such tools in preparing their papers, as long as they "document such use in the methods section or acknowledgments" or elsewhere in the text.

Science chose the firmer stance, Thorp told me, to avoid a repeat of the difficulties that arose in the 1990s with the advent of Photoshop, which enables image manipulation.


Science fiction and fantasy stories that were plagiarized or chatbot-generated, as received by Neil Clarke's Clarkesworld Magazine, by month. The huge recent increase is in chatbot-written submissions, which accounted for nearly 40% of all submissions in February.

(Neil Clarke)

"In the early days, people did a lot of things to their photos with Photoshop that we now consider unacceptable," he says. "When it first came out we didn't have a lot of rules about it, and now people like to go back to the old papers and say, 'Look what they did with Photoshop.' We don't want a repeat of that."

The Science journals will wait for the scientific community to coalesce around standards for acceptable use of AI software before revisiting their rules, Thorp says: "We started with a very strict set of rules. It's a lot easier to loosen your guidelines later than it is to tighten them."


Underlying the concerns about ChatGPT and its AI cousins, however, may be exaggerated impressions of how adept they are at replicating human thought.

What tends to be overlooked, in part because of the power of the hype, is how poorly these programs handle intellectual processes that humans carry out naturally and, in general, almost flawlessly.

Credulous media stories about ChatGPT's brilliance focus on its successes "without a serious look at the range of its mistakes," computer scientists Gary Marcus and Ernest Davis have noted.

Helpfully, they have compiled a database of hundreds of sometimes soaring flops by ChatGPT, Microsoft's Bing and other language-generating software: their inability to do simple arithmetic, to count to five, to understand the order of events in a narrative, or to grasp that a person can have two parents, not to mention their tendency to fill gaps in their knowledge with fabrications.

AI advocates maintain that these shortcomings will eventually be fixed, but when or even if this will happen is uncertain, since human thought processes themselves are not well understood.


The glorification of ChatGPT's supposed skills overlooks that the program and other chatbots essentially scour the internet for existing content (blogs, social media, digitized old books, personal rants by the ignorant, learned discourses by the wise) and use algorithms to stitch together what they find in a way that mimics sentience but isn't it.

Human-generated content is their raw material, human designers “train” them on where to find content to respond to a query, and human-written algorithms are their set of instructions.

The result is like a magician’s trick. Chatbots appear human because each line of output is a reflection of human input.

The amazement that humans experience at the apparent sensibility of chatbots is not a new phenomenon. It echoes what Joseph Weizenbaum, the inventor of ELIZA, a 1960s-era natural-language program that could replicate a therapist's responses to a "patient's" complaints, noticed in the emotional reactions of users interacting with the program.

"What I had not realized," he wrote later, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."


It's true that ChatGPT is good enough at imitation to fool even trained professionals at first glance. According to a team of researchers at the University of Chicago and Northwestern University, four professional medical reviewers correctly picked out only 68% of ChatGPT-generated abstracts of published scientific papers from a stack that mixed them with genuine abstracts. (They also incorrectly flagged 14% of the genuine abstracts as machine-generated.)

Nature notes that "ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code."

But these are all fairly standardized categories of writing, typically imbued with rote language. "AI is not creative, it's iterative," says John Scalzi, the author of dozens of science fiction books and short stories, who objected to the use by his publisher, Tor Books, of machine-generated art for the cover of a fellow author's latest novel.

(Tor said in a published statement that it had been unaware the cover art "may have been AI-generated," but that because of production schedules it had no choice but to proceed with the cover once it learned of the art's source. The publisher said it has "championed SFF [that is, science fiction and fantasy] creators since our founding, and we will continue to do so.")

As for the potential for machine-generated content to keep improving, "I think it's inevitable that it will get to a 'good enough' level," Scalzi says. "But I don't think AI is going to create a work of art akin to 'Blood Streak' or 'How Stella Got Her Groove Back.'"


Already, undisclosed or poorly disclosed use of chatbots is being treated as a professional offense comparable to plagiarism. Two administrators at Vanderbilt University's Peabody College of Education and Human Development were suspended after the school sent out an email responding to the Feb. 13 shooting at Michigan State University that called on the Vanderbilt community to "come together" to create a safe and inclusive campus environment.

The email carried a small-print notice that it had been paraphrased from ChatGPT. The Michigan State shooter killed three students and injured five.

To be fair, Vanderbilt's ChatGPT-written email wasn't all that different from what human members of the university staff might have produced themselves: an example of the "thoughts and prayers" brand of condolence issued in the wake of public tragedies, which sounds empty and robotic even when produced by beings of flesh and blood.

Chatbot products have yet to break through to the elite levels of creative writing or art. Clarke says the motive behind the machine-written stories submitted to his magazine appears to be not creative aspiration but the pursuit of a quick buck.

"A lot of it is coming from 'side hustle' sites on the internet," he says. "People are being misled by the promoters of these money-making schemes."


Others fear that the ease of creating authentic-looking but machine-generated prose and images will make the technology a tool for wrongdoing, much as cryptocurrencies have found their most reliable use cases in scams and ransomware attacks.

That is because these programs are not ethical entities but tools that respond to the motives of their users and depend on the source material they are pointed at. Microsoft had to tweak its new ChatGPT-powered Bing search engine, for example, after it responded to some users' requests in ways that were bizarre, unnerving and even insulting.

ChatGPT's output is theoretically constrained by ethical "guardrails" devised by its developers, but these are easy to evade: "nothing more than lipstick on an immoral pig," as Marcus notes.

He adds: "We now have the most widely used chatbots in the world, … glorified by the media, yet with ethical guardrails that only sort of work. There is little, if any, government regulation in place to do much about it. The possibilities are now endless for propaganda, phishing farms and rings of fake websites that undermine trust across the internet. It's a disaster in the making."



