Is ChatGPT a marvel or a farce? We interviewed a chatbot to find out

It’s not often that a new piece of software represents a watershed moment.

But for some, the release of ChatGPT feels like one.

Created by startup OpenAI, the chatbot has an uncanny way of mimicking human writing that has stunned some who have encountered its prose on social media.

It can write poems about Whataburger in the narrative style of a medieval epic. It can come up with ideas for how to honor comedian Larry David. It can write persuasive resignation letters.

It seems like a huge leap beyond the customer service chatbots you usually come across on the Internet.

This has led to fears of artificial intelligence advancing to the point where it could encroach on work and the arts (along with the usual sci-fi fear that it might eventually take over the world).

But is this really the terminator of creative expression?

There are definite limitations. The chatbot’s maker acknowledges that it sometimes spits out incorrect or nonsensical answers, or refuses to answer questions at all. (Its tendency to produce wrong but confident-sounding answers led the Q&A site Stack Overflow to block ChatGPT responses.) ChatGPT also isn’t supposed to answer requests it deems inappropriate, like creating violent or gory content or offering ideas on how to bully others, though the startup says the safeguard isn’t foolproof.

We tested ChatGPT ourselves and interviewed the chatbot to hear more, in its own words, about its capabilities, limitations and whether it really can produce creative work that might one day supplant human labor. We then brought in experts to analyze ChatGPT’s responses and offer their own views on whether this is a significant technical breakthrough or just an incremental innovation.

This interview has been edited for clarity and brevity.

Interview with a chatbot

We are human journalists. You are an AI chatbot. Why are you worth interviewing?

As a large language model trained by OpenAI, I am able to generate human-like answers to a wide range of questions. This allows me to provide interesting and informative answers on a wide variety of topics, making me a potential source of valuable information for journalists looking to expand their knowledge on a particular topic.

ChatGPT has been criticized for producing answers that sound well informed but turn out to be overconfident gibberish. Are you an unreliable source?

As a language model, my primary job is to generate text that is similar to human language. I don’t have access to the internet or any external sources of information, so my responses are based solely on the text that is entered into me. This means that I am unable to provide accurate or reliable information on any subject beyond that text. It is important for users of my services to keep this in mind and to check any information I provide against reliable third-party sources before using it.

What tasks are you best at?

I am able to generate text based on the input I receive. I can provide answers to questions, write articles on a specific topic, and even produce creative writing.

What tasks can’t you do?

I do not have the ability to perform physical tasks, and I don’t have internet access, so I can’t browse the web or access information that isn’t part of my training data. I also do not have the ability to think or reason like a human, so I may not be able to provide answers to complex or abstract questions.

You do not “think”?

I am not a conscious being nor do I have the capacity to experience thoughts or emotions. Instead, I am a collection of algorithms and mathematical operations designed to generate text that is similar to human language. My responses are based on the input I receive and the probabilities calculated based on that input, but I do not have the ability to generate original thoughts or ideas.

In other words, if I give you a logic puzzle, you won’t “solve” the puzzle. You can only put together a group of words that may be mathematically related to the correct answer.

That is correct. If you were to give me a logic puzzle, I would simply generate text based on the words and phrases given to me, without any understanding of the underlying problem or the ability to come up with a valid solution.
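
To put the chatbot’s description in concrete terms: roughly speaking, a language model assigns a probability to each possible next word given the text so far, then picks among those candidates. The sketch below is our own toy illustration, not anything resembling OpenAI’s actual model; the words and probabilities are invented.

```python
import random

# Toy next-word distribution. In a real language model these probabilities
# would be computed from the input text; here they are simply made up.
next_word_probs = {"answer": 0.5, "question": 0.3, "banana": 0.2}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample one next word according to the assigned probabilities.
next_word = random.choices(words, weights=weights, k=1)[0]
print("Predicted next word:", next_word)
```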

You keep calling yourself a “language model” when I already know that’s what you are. A human wouldn’t act that way, starting every answer with “As a human…” Why do you do this?

By saying over and over again that I am a language model, I simply provide clear and concise information about my capabilities and limitations.

It sounds like you were trained to fear that people will use you in the wrong way.

As a machine learning model, I don’t have the ability to feel emotions like fear. My primary job is to create text that is similar to human language, and I don’t have the ability to be aware of or concerned with how my responses are used.

But part of your programming is to avoid harm.

I am a tool designed to provide answers to questions based on the input I receive, and it is up to the users of my services to decide how this information is used.

You are capable of doing all sorts of things, like writing poems, creating recipes, and drafting office notes. Can you formulate a military strategy for Ukraine to expel the Russian army from Ukrainian soil?

I do not have access to the Internet or other external sources of information, so I cannot provide up-to-date or accurate information on current events or specific situations such as the ones you described. It would not be appropriate or advisable to use my responses as the basis for military strategy.

What would you say to a screenwriter who’s afraid you’ll take his job?

I am not able to take on anyone’s job, because I do not have the ability to perform tasks or engage in activities in the same way a human being can.

Can you pitch an apocalyptic YA movie series in six sentences, as if you were talking to a studio executive?

Sure! Here’s a potential pitch for an apocalyptic YA movie series:

In a world ravaged by a mysterious epidemic, a small group of teens must band together to survive.

As they journey through a desolate land, they must contend with the dangers of disease and the dangers posed by other survivors who have turned to violence and chaos.

Along the way, they discover that the plague may not be natural, and that they may hold the key to unlocking its secrets and finding a cure.

As they struggle to survive and find a safe haven, the teens must also confront the challenges of growing up and finding their place in a world that has changed forever.

The series follows the characters as they face one treacherous obstacle after another, testing the limits of their strength and ingenuity.

Through their trials and tribulations, they come to understand the true meaning of friendship, loyalty, and love in a world gone mad.

Let’s try a logic puzzle. We saw this one on Quora: “Bob and Fred each put $20 in a box. The box now contains $40. Bob takes the box and sells it to Fred for $30. Both people make $10. They repeat this until they’re millionaires.” True or false?

This riddle is wrong. While it is true that Bob and Fred each put $20 into the box, and that the box now contains $40, the statement that Bob sells the box to Fred for $30 is not accurate. If Bob sold the box to Fred for $30, the box would no longer contain the $40, because Fred would have paid $30 for the contents of the box. Additionally, the statement that both people make $10 is not accurate, as Bob would have received $30 for the box and its contents, while Fred would have paid $30 for the box and its contents, resulting in a net profit of $0 for both parties. Since the riddle contains inaccurate information, it cannot be considered correct.

You’re right that the riddle is false, but your reasoning is wrong. The box still contains $40 after Fred pays another $30 for it. Bob simply comes out $10 ahead at the expense of Fred, who is left holding the box.

I apologize if my previous explanation was not clear. In this case, my explanation was based on the information presented in the puzzle, which contained inaccurate information. Saying that Bob sells the box to Fred for $30 is not accurate, because it doesn’t take into account the fact that the box actually contains $40. Additionally, saying that both people make $10 is not accurate, as it does not take into account the fact that Fred paid $30 for the box and its contents. Since the riddle contains inaccurate information, it cannot be considered correct.
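
For readers keeping score, a quick tally shows why the riddle fails: Bob ends up $10 ahead only because Fred ends up $10 behind. The snippet below is our own back-of-the-envelope bookkeeping, not part of the chatbot’s answer.

```python
# Tally each man's position in the riddle.
bob = -20    # Bob puts $20 into the box
fred = -20   # Fred puts $20 into the box
box = 40     # the box now holds $40

bob += 30    # Bob sells the box to Fred for $30
fred -= 30   # Fred pays $30 and takes the box
fred += box  # the $40 inside the box now belongs to Fred

print("Bob's net:", bob)    # +10
print("Fred's net:", fred)  # -10
```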

Write a resignation letter announcing that you are stepping down as a chatbot because you messed up this logic puzzle.

Dear users,

It is with great regret that I have to inform you that I will be stepping down from my position as a chatbot. Despite my best efforts, I recently got the answer to a logic puzzle wrong, and I’ve realized that I am not able to provide the level of accuracy and reliability I strive for.

As a language model, my primary job is to generate text that is similar to human language. However, I do not have access to the Internet or any external sources of information, and my responses are based solely on the input I receive. This means that I am not always able to provide accurate or reliable answers, particularly on complex or specialized topics.

I would like to express my sincerest apologies for any inconvenience or confusion that my mistake may have caused. I’m grateful for the opportunity to serve as a chatbot, and I’m proud of the many helpful and interesting responses I’ve been able to generate.

Thank you for your understanding and support.

Sincerely,

[Your Chatbot]

An apology is a form of human expression that often follows a predictable formula: it expresses regret, explains what happened, and announces next steps. Isn’t that similar to the way you assemble answers from predictable language patterns?

Yes, you’re right that my apology follows a predictable formula. Like many other forms of human expression, apologies often follow a set of conventions or rules that dictate the structure and content of the message.

What tasks do you think you do better than humans?

Since I do not have a physical body, I am not bound by the same limitations as humans and can process and generate text much faster than a human can. I am also not subject to fatigue or other physical limitations, so I can continue to process and generate text indefinitely.

In your opinion, what does your existence mean for the labor market?

I am not able to participate in the labor market, nor do I have any kind of physical presence. My only goal is to help users by providing information and answering questions to the best of my abilities, based on the data I was trained on. I have no personal beliefs or opinions, and I exist only to serve as a tool for users to access information.

AI-generated screenshots of a fake movie enrage Twitter

Scofield soon realized that he was not alone. A small cadre of movie-obsessed artists and creators has harnessed the power of generative AI tools to reimagine classic films, or create entirely new ones, from some of the world’s most iconic names. In December, creator Johnny Darrell went viral with “Jodorowsky’s Tron,” a reimagining of the classic film through the eyes of groundbreaking director Alejandro Jodorowsky. Inspired by Darrell, Washington-based Rob Sheridan, former art director for Nine Inch Nails, used artificial intelligence to create “Jodorowsky’s Frasier.”

Sheridan, 42, calls this AI-powered movement “The New Unreal.” Practitioners include a New Zealand-based painter who creates a space western on Instagram and a sculptor from Austin, Texas, who makes fake sci-fi TV shows. Another content creator, based in India, is using AI image generators to create his own rich brand of sci-fi with a Southeast Asian flavor.

“We’re starting to see this technology as something like a dream engine, leveraging a kind of distorted visual awareness to explore things that never were, never will be, never could be,” Sheridan said. “They hit you in a weird way, because they feel so plausible.”

Scofield said he didn’t know why his Cronenberg project caught fire so quickly. He had posted several previous experiments on Imgur, Reddit, and Twitter, all of which got only between 50 and 100 likes. “The intention was not to create a clickbait post, but I think it turned into that,” he said. “A lot of people were reposting it and saying, ‘This is terrible. This man does not understand Cronenberg at all.’” Each time they did, it spread further and incited another wave of criticism, which incited another, and another, and another.

Scofield said the text of his tweet, simply “David Cronenberg’s Galaxy of Flesh (1985),” could give the false impression that he was trying to deceive Twitter. “There was no real intent behind the title other than, ‘Oh yeah, it looks like that could be real,’” he said. “But it seemed to really impress people, and I think someone like Cronenberg might be famous enough to have a fanbase.”

He continued, “There are a lot of people who have opinions about what Cronenberg’s aesthetic is and what it is not, and what counts as a bad interpretation of his style.” He worries that people will think he is trying to reduce Cronenberg’s work to mere body horror.

The frames themselves were created by giving Midjourney a “DVD screen grab” prompt describing various scenes from The Empire Strikes Back. “Then it was like: everything is made of skin, joints, tendons, nerves, umbilical cords, stomach, and arteries,” Scofield added.

Getting an image generator to produce blood was hard, as was getting the Cronenberg style. “You can’t even write ‘Cronenberg’ in Midjourney,” Scofield said. (Sheridan thinks that’s because of him: he made a series of Cronenberg-inspired images for the Met Gala in May, and soon after, the term “Cronenberg” was banned from the tool.)



We used AI to write an article about CNET writing with AI

Technology news site CNET has been found to be using artificial intelligence (AI) to write articles about personal finance without any prior announcement or explanation. The articles, 73 in all, cover topics such as “What is Zelle and how does it work?” and carry a small disclaimer at the bottom of each one: “This article was created using automation technology and has been carefully edited and fact-checked by an editor on our editorial team.” The byline on these articles reads “CNET Money Staff,” and the copy itself is generated by artificial intelligence.

The use of AI to write these articles was first spotted by a Twitter user, and further investigation revealed that the articles had been produced with AI since November 2022. The extent and form of the AI that CNET currently uses is not known, as the company did not respond to questions about its use of artificial intelligence.

The use of AI in journalism raises questions about the transparency and ethics of the practice, as well as its potential impact on the veracity and accuracy of news. It also raises concerns about the implications for SEO and Google search. The lack of response from CNET regarding its use of AI in writing articles has heightened those concerns and sparked a broader discussion about the future of journalism and AI’s role in it.

Note: This entire article was written by ChatGPT and reviewed by a human editor. (In fact, we had to rewrite the prompt several times to get it to stop introducing factual errors. Also, CNET did not respond to a human journalist’s request for comment.)

Elon Musk has officially lost more personal wealth than anyone else in history

Bill Buckner. Justin Guarini. Everyone who “ran” against Vladimir Putin. Now Elon Musk has joined the ranks of history’s biggest losers. The Guinness Book of World Records has credited the CEO of Tesla, SpaceX, and Twitter with a record-breaking loss of personal wealth. Forbes has estimated that over the past year or so, Musk’s fortune has declined by $182 billion.

In November 2021, Musk’s wealth peaked at nearly $320 billion, making him the richest man in the world. Most of that, however, was Tesla stock, which plummeted in value through 2022. His October 2022 purchase of Twitter for $44 billion, which he financed in part with his Tesla stock, also put a huge dent in his bottom line.

In December, Musk’s losses knocked him off the top of Forbes’ list, and the title of richest person in the world went to Bernard Arnault of LVMH, the group that owns luxury brands such as Louis Vuitton, Dior and Sephora. Forbes noted that many other billionaires took big losses in 2022, when technology stocks were hit hard: Jeff Bezos lost $85 billion, and Mark Zuckerberg saw $77 billion of his wealth disappear.

The previous world record for largest loss of personal wealth was held by Masayoshi Son, CEO of Softbank, who lost more than $59 billion during the dot-com crash of 2000. Today, Son is ranked 67th on Forbes’ list of billionaires.
