My 95-year-old grandfather has lost most of his hearing. AirPods and Live Listen let us have conversations again

As he approached ninety, Abba’s world shrank. He spent his days reading and watching TV, listening to the audio through a pair of oversized wireless headphones with the volume turned up to the max. He still wore his hearing aids, but as his hearing worsened, the devices became less effective. Simple conversations now took superhuman effort and ended in shouting matches and frustration.

“Do you want dinner?”

“Do you feel sleepy?”

“Can I have some tea?”


Phone calls were impossible. Abba had to put his phone on speakerphone, press it directly to his ear, and tell the person on the other end to shout as loudly as they could. In the end, “talking” to Abba on the phone meant making a video call and smiling and waving at him.

When I visited him in the fall of 2022, I was wearing a pair of AirPods, and he pointed to my ear with a puzzled expression on his face.

“Headphones!” I shouted. “I use them to listen to music!”

And then I wondered if I could use them for something more substantial.

In 2018, Apple made Live Listen, an iOS feature that lets iPhones and iPads transmit audio from their microphones directly to compatible hearing aids, work with regular AirPods. I had never had a reason to use the feature myself, but now I was curious: could Live Listen help me have a real conversation with my grandfather after all these years?


I took the AirPods out of my ears and put them into his. I turned on Live Listen on my iPhone, held it close to my mouth, and spoke into it.

“Hi, can you hear me?”

Abba’s face lit up, and he nodded excitedly. “I can hear you! I can hear you!”

AirPods are not my favorite Apple product. I think they’re overpriced, and they don’t sound great for what you pay. But it’s also true that no other wireless earbuds work as seamlessly with iPhones, which is why they’re the default wireless earbuds for most people, myself included.

They are also an environmental hazard. Vice called AirPods “fossils of future capitalism”, destined for landfills once their tiny batteries, encased in hard plastic, wear out after a couple of years. And I resent that Apple got rid of the headphone jack, which worked perfectly well, and made people pay for something they used to get in the box for free.


But with Live Listen, the AirPods helped me reconnect with my grandfather in a way no other device could. I’m willing to set aside my misgivings for that.

A sci-fi magazine has been flooded with submissions written by AI


He tried using AI-detection tools, but found them lacking. (The detector released by OpenAI, the company behind ChatGPT, catches only about 1 in 4 AI-written texts.) Unfamiliar turns of phrase in submissions from authors based outside the US, whose first language is not English, can sometimes trip up such tools, he said. “There is an inherent bias in these detectors,” Clarke said.

Clarke believes that rapid advances in artificial intelligence over the next few years will render these detection tools completely ineffective. “AI will be writing at a level you won’t be able to distinguish from a normal human being,” he said.

At least one person responsible for creating generative AI tools shares Clarke’s concerns. Amit Gupta is the co-founder of Sudowrite, an AI tool for writers that helps with edits, generates plot ideas, and completes entire sentences and paragraphs. In an interview with BuzzFeed News, Gupta, who is also a science fiction author and has submitted to Clarkesworld many times in the past, said what the magazine was going through was “terrible” and “really disappointing”.

He said that something like ChatGPT, which generates large blocks of text from scratch, would be a better tool for churning out sci-fi submissions than Sudowrite, which is mostly used on stories already in progress. He noted that Sudowrite limits the number of stories a user can create with the tool in a single day. “But if someone just comes in and writes, like, three stories every day, I don’t think we can stop that use case,” Gupta said. “It feels like a gray area between legitimate and illegitimate use.”


Clarke called the entire field of generative AI “an ethical and legal gray area”.

“Who owns this [submitted] work?” he asked. “If I buy one, who am I paying? The person didn’t write it. The chatbot doesn’t own it.” He also pointed to the lack of transparency about the data these tools are trained on. “Look at what’s going on in the art world,” he said, referring to a case in which three artists sued the makers of popular AI image generators, claiming the tools were trained on their art without their permission.

But in the end, the real problem isn’t how good or bad the text the AI tools generate is, Clarke said. The problem is their speed. “We were buried,” he said. “I never expected a bunch of side-hustle coaches to swamp our submission system.” Meanwhile, he said, “the irony of a science fiction magazine being flooded with stories written by artificial intelligence is not lost on me.”


Hiltzik: Chatbot content flooding publishers


After 17 years as publisher of science fiction and fantasy short stories at Clarkesworld, the electronic and print magazine he founded in 2006, Neil Clarke has become adept at weeding out the occasional plagiarized piece from the hundreds of submissions he receives each month.

The problem worsened during the pandemic lockdowns, when workers confined at home were looking for ways to boost their income. But it exploded in December, after San Francisco-based OpenAI publicly released ChatGPT, a particularly sophisticated machine for generating prose.

Of the 1,200 submissions he received this month, Clarke believes, 500 were machine-generated. He has stopped accepting submissions until at least the end of the month, but he’s confident that if he hadn’t, the synthetic pile would have reached parity with legitimate submissions by then, “and more than likely passed” them, he says.

This whole madness has taken over the world, and everyone is in a frenzy.

– H. Holden Thorp, Science editor-in-chief


“We saw no reason to think it was slowing down,” says Clarke, 56. As he did with plagiarism, he has rejected the machine-generated content and permanently banned its submitters.

A veteran of the software industry, Clarke has developed heuristics that let him recognize machine-written prose on sight. He doesn’t share them, fearing that simple tweaks could allow users to evade his screens. “It will still be bad stories and they will still be rejected, but it will make it ten times harder for us.”

Regulators have barely begun to consider the legal implications of software that can scour the web for raw material, including the potential for widespread copyright infringement and for fraud in prose and art.

That veneer of authenticity “will make generative AI attractive for malicious uses where the truth is less important than the message it conveys, such as disinformation and online harassment campaigns,” Alex Engler of the Brookings Institution wrote.

The onslaught of machine-written material has become Topic A in the periodical industry, because ChatGPT and similar programs known as chatbots have demonstrated their ability to produce prose that mimics human writing to an astonishing degree.

“Everyone is talking about this, and a lot of people are focused on what it will mean for science publishing,” says H. Holden Thorp, editor-in-chief of the Science family of journals. “This whole madness has taken over the world, and everyone is in a frenzy.”

Science has imposed the strictest policy among technical publishers on content generated by AI tools such as chatbots: It’s forbidden, period.


The ban covers “text generated from AI, machine learning, or similar algorithmic tools” as well as “figures, images, or graphics” produced the same way. AI software cannot be listed as an author. Science warns that it will treat violations of this policy as “scientific misconduct.”

Thorp further explained in an editorial that using programs such as ChatGPT would violate a basic Science principle that authors must certify their submitted work is “original”. That alone is “enough to signal that text written by ChatGPT is not acceptable,” he wrote: “It is, after all, plagiarized from ChatGPT.”

This goes well beyond the rules set by Nature, which ranks with Science at the top of the most prestigious scientific journals. Nature’s journals specify that language-generation programs cannot be credited as authors on a paper, as has happened with some published papers: “Any attribution of authorship carries with it accountability for the work,” Nature explains, “and AI tools cannot take such responsibility.”

But Nature allows researchers to use such tools in preparing their papers, as long as they document that use in the methods section, the acknowledgments, or elsewhere in the text.

Science chose to take a firmer stance, Thorp told me, to avoid repeating the difficulties that arose in the 1990s with the advent of Photoshop, which enables image manipulation.


Science fiction and fantasy submissions plagiarized or created by chatbots, received by Neil Clarke’s Clarkesworld Magazine, by month. The huge increase in recent months comes from chatbot-written submissions, which accounted for nearly 40% of all submissions in February.

(Neil Clarke)

“In the beginning, people did a lot of things to their photos with Photoshop that we now consider unacceptable,” he says. “When it first came out, we didn’t have a lot of rules about that, and now people like to go back to old papers and say, ‘Look what they did with Photoshop.’ We don’t want a repeat of that.”

The Science journals will wait for the scientific community to coalesce around standards for acceptable use of AI software before revisiting their rules, Thorp says: “We started with a very strict set of rules. It’s a lot easier to loosen your guidelines later than it is to tighten them.”


Underlying the concerns about ChatGPT and its AI cousins, however, are possibly exaggerated impressions of how adept they are at replicating human thought.

What is continually overlooked, in part because of the power of the hype, is how poorly these programs handle intellectual processes that humans carry out naturally and, in general, almost flawlessly.

Credulous media stories about ChatGPT’s “brilliance” focus on its successes “without looking seriously at the range of mistakes,” computer scientists Gary Marcus and Ernest Davis note.

Helpfully, they’ve compiled a database of hundreds of often-spectacular flops by ChatGPT, Microsoft’s Bing, and other language-generating programs: their inability to do simple arithmetic, to count to five, to understand the order of events in a narrative, or to grasp that a person can have two parents, not to mention their tendency to fill gaps in their knowledge with fabrications.

AI advocates maintain that these shortcomings will eventually be fixed, but when or even if this will happen is uncertain, since human thought processes themselves are not well understood.


The glorification of ChatGPT’s supposed skills ignores that the program and other chatbots essentially scour the internet for found content (blogs, social media, old digitized books, rants by the ignorant, learned disquisitions by the wise) and use algorithms to stitch together what they find in a way that mimics sentience but isn’t sentient.

Human-generated content is their raw material, human designers “train” them on where to find content to respond to a query, and human-written algorithms are their set of instructions.

The result is like a magician’s trick. Chatbots appear human because each line of output is a reflection of human input.

The amazement humans feel at the apparent sentience of chatbots is not a new phenomenon. It evokes what Joseph Weizenbaum, the inventor of Eliza, a 1960s-era natural language program that could replicate a therapist’s responses to a “patient’s” complaints, noticed in the emotional reactions of users interacting with the program.

“What I had not realized,” he wrote later, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”


It’s true that ChatGPT is good enough at imitation to fool even educated professionals at first glance. According to a team of researchers at the University of Chicago and Northwestern University, four professional medical reviewers correctly identified only 68% of ChatGPT-generated abstracts of published scientific papers in a stack that mixed generated abstracts with real ones. (They also incorrectly flagged 14% of the genuine abstracts as machine-written.)

Nature notes that “ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code.”

But those are all fairly rote categories of writing, typically filled with boilerplate language. “Artificial intelligence is not creative, but iterative,” says John Scalzi, the author of dozens of sci-fi novels and short stories, who objected when his publisher, Tor Books, used machine-generated art for the cover of a fellow author’s latest novel.

(Tor said in a published statement that it had been unaware the cover art “may have been AI-generated,” but that because of production schedules it had no choice but to move forward with the cover once it learned the source. The publisher said it has “supported the creators of SFF [that is, science fiction and fantasy] since our founding, and we will continue to do so.”)

As for the potential for machine-generated content to keep improving, “I think it’s bound to get to a ‘good enough’ level,” Scalzi says. “I don’t think AI will ever create an enduring work of art akin to ‘Blood Streak’ or ‘How Stella Got Her Groove Back.’”


For the moment, undisclosed or poorly disclosed use of chatbots is being treated as a professional offense comparable to plagiarism. Two administrators at Vanderbilt University’s Peabody College of Education and Human Development were suspended after the school sent an email, in response to the February 13 shooting at Michigan State University, calling on the Vanderbilt community to “work together” to create a safe and inclusive environment on campus.

The email carried a note in small print that it was a “paraphrase” from ChatGPT. The Michigan shooter killed three students and injured five others.

To be fair, Vanderbilt’s ChatGPT email wasn’t all that different from what human university staffers might have produced themselves: an example of the “thoughts and prayers” condolences issued in the wake of public tragedies, which sound empty and robotic even when produced by beings of flesh and blood.

Chatbot products have yet to break through to the elite levels of creative writing or art. Clarke says the motive for submitting machine-written stories to his magazine appears to be not aspiration to creative achievement but the pursuit of a quick buck.

“A lot of it is coming from ‘side hustle’ sites on the internet,” he says. “They’re being misled by people promoting these money-making schemes.”


Others fear that the ease of creating authentic-looking but machine-generated prose and images will make the technology a tool for mischief, much as cryptocurrencies found their most reliable use cases in fraud and ransomware attacks.

That’s because these programs are not ethical entities but tools that respond to the motives of their users and depend on the material they’re pointed at. Microsoft had to tweak its new ChatGPT-powered Bing search engine, for example, after it responded to some users’ requests bizarrely, annoyingly, and even insultingly.

ChatGPT’s output is theoretically constrained by ethical “guardrails” devised by its developers, but they are easy to evade: “nothing more than lipstick on an immoral pig,” as Marcus notes.

“We now have the most widely used chatbots in the world,” he adds, “glorified by the media, yet with ethical guardrails that only sort of work. There is little, if any, government regulation in place to do much about it. The possibilities are endless now for advertising, phishing farms, and rings of fake websites that undermine trust across the internet. It’s a disaster in the making.”




Two friends made a not-so-intelligent chatbot


There are many reasons to panic about AI text generators like ChatGPT. They may let students cheat on AP exams. They may give you bad information about how compound interest works, as when CNET used AI to write SEO-friendly explainers about personal finance. They might try to seduce you into leaving your wife, as the Bing chatbot did with a New York Times technology columnist.

Artificial intelligence is smarter than us. It doesn’t rest. It doesn’t demand union-mandated breaks. It doesn’t ask for a raise. It is relentless and unstoppable. Inevitable.

Or, you know, not. Such is the case with 2dumb2destroy, an AI chatbot trained on some of the most trivial material in the world: Pauly Shore movie quotes, dialogue from all seven Police Academy movies, Homer Simpson sayings, and more.

Craig Shervin, 34, and Steve Nass, 33, two friends who met while working at an advertising agency in New York, wondered what would happen if they created an AI so unintelligent that no one would ever have to fear it.
