Tech

Hiltzik: Elon Musk is going down the far-right rabbit hole


In the early days after his October 27 takeover of Twitter, Elon Musk alarmed platform users and advertisers with a series of tweets that linked him to far-right memes.

These included a gruesome conspiracy claim about the violent assault on House Speaker Nancy Pelosi’s husband, Paul, and an accusation that “activist groups” had pressured advertisers to abandon Twitter.

He called for the return of former President Trump, who was banned from Twitter for supporting the January 6, 2021, insurrection.

“Far left San Francisco/Berkeley views of the world were propagated via Twitter…. No more thumbs on the scale!”

– Elon Musk


Although Musk deleted some of his most inflammatory tweets, including his apparent endorsement of a conspiracy theory about the attack on Paul Pelosi, they became conspicuous markers of the platform’s rightward swing.

Musk’s latest tweets, however, are even more disturbing. He has openly and approvingly engaged with some of the most extreme far-right figures on the internet, including vocal advocates of misogyny and white supremacy.

He has explicitly bought into right-wing Republicans’ attack on “wokeness,” a fabricated grievance that is the GOP’s way of demonizing diversity and inclusion.

Musk’s latest tweets embody Twitter’s least attractive features as a social media platform: crudeness, militancy, a paranoid view of progressive or liberal politics, and a tendency to amplify the most extreme views and try to make them appear mainstream.

Prior to Musk’s takeover, Twitter employees had struggled to rein in these manifestations, with varying success. Part of that process was blocking or suspending accounts responsible for hateful tweets and the use of antisemitic, racist and Nazi speech and images.

By eliminating Twitter’s content moderation team and granting a “general amnesty” to previously suspended accounts, as he announced would begin this week, Musk risks making the site less useful and less inviting to the vast majority of users.

Musk has continued to stress that his goal is to facilitate “free speech” on the platform, in effect to invigorate the marketplace of ideas, the sunlight traditionally seen as a disinfectant against falsehood and corruption in American society, as articulated by Louis D. Brandeis in 1913.


In practice, however, Musk has paid little more than lip service to the concept. On Sunday, for example, he tweeted a call for “people of differing political or other views” to engage in civil debate on Twitter.

Just an hour earlier, Musk had attacked former Army Lt. Col. Alexander Vindman, who is Jewish, using an age-old antisemitic slur that portrays Jews as string-pullers wielding covert influence over society and as tools of the Israeli government. “Vindman is both puppet & puppeteer,” Musk tweeted. “The question is who is pulling the strings…?”

In recent days, Musk has been “increasingly promoting far-right theories and white supremacist content,” Josh Marshall noted at his site Talking Points Memo, where he compiled some of the most striking examples.

Musk agreed with the idea that Twitter had suppressed conservative views.

(Twitter)


The subtext of many of Musk’s tweets may be undetectable at first glance to the average reader, so some perspective helps.

On Thanksgiving Day, he tacitly endorsed a tweet from a white supremacist asserting that a reported crackdown on child-exploitation accounts was “wiping out a lot of Antifa Twitter,” a claim that equated pedophilia with antifa, the decentralized movement opposed to fascism.

Musk responded, “Removing child exploitation is Priority #1” and asked the tweeter to notify him “if you see anything Twitter needs to address.”

The original message came from Paul Ray Ramsey, who tweets under the handle @ramzpaul. As documented by the Southern Poverty Law Center, Ramsey, a white nationalist, has called women’s suffrage a “cancer” and questioned the historical truth of the Holocaust.


The day before, Musk had tacitly endorsed a tweet by the hacker Kim Dotcom accusing the Biden administration of advancing an immigration policy as the Democratic Party’s “voter farming strategy,” that is, increasing the party’s voting base to “preserve power” by legalizing immigrants.

Musk responded, “The behavior follows impulses of political power.” “Kim Dotcom” is the alias of German-born Kim Schmitz, an accused hacker who has been in New Zealand fighting extradition to the United States for years.

In response to a tweet depicting “Woke Propoganda” [sic] as a Trojan horse wheeled in by “Woke Teachers” to attack the “child’s brain” [sic], Musk tweeted, “Exactly.”

Musk appears to have fully agreed with the right-wing view that Twitter, as a public company, has systematically suppressed conservatives and promoted progressive accounts.

“It was really bad,” he tweeted on November 23. “Far left San Francisco/Berkeley views of the world were propagated via Twitter. I’m sure this came as no surprise to anyone watching closely. Twitter is moving quickly to create a level playing field. No more thumbs on the scale!”


Two days later, he tweeted: “The woke mind virus has thoroughly penetrated entertainment and is pushing civilization toward suicide. There has to be a counter-narrative.”

What might explain Musk’s public drift to the far right is not clear. Marshall has collected some of the prevailing speculation: Musk’s upbringing in apartheid-era South Africa and his connections with right-wing Silicon Valley investor Peter Thiel.

The underlying theme of these conjectures is that Musk didn’t move right, but for some reason today feels able to express old opinions more openly.

It’s also conceivable that Musk craves validation and has found a warm welcome among the extremists who have come to see him as their hero. As John P. Moore of Cornell’s medical college told me a few weeks ago, if you are embraced by the fringe and “have that psychological need for some kind of affirmation from the people you interact with, it must be very tempting.”

Whatever the explanation, Musk’s actions may well prove disastrous for Twitter.


His mercurial temperament, and especially his apparent embrace of right-wing tropes, racism and outright antisemitism, has alarmed the advertisers Twitter must rely on for the revenue it needs to survive. Rare is the consumer company that would risk having its Twitter ads appear anywhere near today’s racist or antisemitic content.

At least 50 of the top Twitter advertisers of the pre-Musk era have paused or pulled their ads from Twitter, according to a survey by the progressive organization Media Matters for America. Most have quietly pulled out, but some have either publicly announced the suspension of their campaigns or been reliably reported to have done so, including Ford, Jeep, Chipotle and Merck.

Musk initially tried to mollify skittish advertisers by assuring them he would not allow Twitter to become a “free-for-all” on his watch, and pledged to create an independent review board to rule on account bans and suspensions.

He has since backtracked, reportedly calling corporate executives to reprimand them for suspending their advertising and personally approving the return of banned accounts to the platform, including Trump and right-wing Rep. Marjorie Taylor Greene (R-Ga.).

In a November 22 tweet, Musk asserted that he had reneged on his promise to create a moderation council because “a large coalition of political/social activist groups agreed not to try to kill Twitter by starving us of ad revenue if I agreed to this condition. They broke the deal.”


There is no evidence of any such agreement, and civil rights leaders who met with Musk around the time he announced the council say they agreed to no such thing.

Twitter remains the world’s leading platform for breaking, real-time news. Its value for that purpose has been demonstrated in recent days by communications about the protests across China over the regime’s strict coronavirus lockdowns.

If Twitter collapses under Musk’s management style, his policies, his treatment of content producers and his toxic comments, something supremely useful will be lost and be almost impossible to replace, at least in the near term.

So far, Musk has shown no awareness of the damage his behavior has done to the platform he’s spent $44 billion — including more than $33 billion of his personal fortune — to acquire. That’s a problem, because the only way to bring advertisers back into the fold and protect Twitter’s role in the public discourse is for Musk to disappear — to appoint a CEO who has credibility in the social media space with users and advertisers, and stop tweeting himself.

If this were to happen today or tomorrow, Twitter’s restoration to trustworthiness could start right away.


But Musk has already made it impossible for that recovery to happen quickly. If anything, he has become more arrogant and more entrenched in his behavior. Only in the last day or two, he tweeted an image of Pepe the Frog, a meme identified by the Anti-Defamation League as carrying racist and antisemitic connotations.

He has also picked a fight with Apple, which he says has threatened to “withhold” Twitter from its iPhone and iPad App Store, though he says the company “won’t tell us why.” He tweeted that Apple has “mostly stopped” advertising on Twitter and asked, “Do they hate free speech in America?”

(In fact, the reason Apple is so concerned about Twitter is clear: Apple carefully vets the apps it offers users to make sure they’re clean and free of hate. Musk’s policies may not guarantee these qualities.)

Put it all together, and things on Twitter will likely get worse before they get better… if they get better at all.






ChatGPT raises the specter of sentient AI. Here’s what to do about it


Until a couple of years ago, the idea that artificial intelligence might be sentient and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve seen an astonishing rush of developments in artificial intelligence, including language models such as ChatGPT and Bing Chat with remarkable skill in human-seeming conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotion and suffering, we face a potentially catastrophic moral dilemma: either give these systems rights, or don’t.

Experts are already considering the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly wondered whether “today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real feelings. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.


Currently, few consciousness scientists claim that AI systems possess significant sentience. However, some leading theorists contend that we already have the core technological ingredients for sentient machines. We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial moral consideration.

AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be shut down, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on new rights, freedoms and powers; perhaps even expect to be treated as our equals.

In this case, whatever we choose, we run enormous moral risks.

Suppose we respond conservatively, declining to change law or policy until there is broad consensus that AI systems really are meaningfully sentient. While this might sound appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations. If AI consciousness arrives sooner than the most conservative theorists expect, this could lead to the moral equivalent of slavery and the killing of potentially millions or billions of sentient AI systems, suffering on a scale usually associated with wars or famines.

It might seem safer, morally, to give AI systems rights and moral standing as soon as it is reasonable to think they might be sentient. But as soon as we give something rights, we commit to sacrificing real human interests on its behalf. Human well-being sometimes requires controlling, altering and deleting AI systems. Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious. Or imagine if someone let a human die to save an AI “friend.” If we grant AI systems too many rights too quickly, the human costs could be enormous.


There is only one way to avoid the risk of over- or under-attributing rights to advanced AI systems: don’t create systems of debatable sentience in the first place. None of our current AI systems are meaningfully conscious. They are not harmed if we delete them. We should commit to creating only systems that we know are not significantly sentient and do not deserve rights, which we can then treat as the disposable property they are.

Some will object: it would hamper research to block the creation of AI systems whose sentience, and thus moral standing, is unclear, systems more advanced than ChatGPT, with highly sophisticated but not humanlike cognitive structures beneath their apparent feelings. Engineering progress would slow while we wait for the science of ethics and consciousness to catch up.

But reasonable caution is rarely free. It is worth some delay to prevent a moral catastrophe. Leading AI companies should submit their technology for examination by independent experts who can assess the likelihood that their systems are in the moral gray zone.

Even if the experts don’t agree on the scientific basis of consciousness, they could identify general principles for defining that zone, for example, the principle of avoiding creating systems with sophisticated self-models (such as a sense of self) together with large, flexible cognitive capacity. Experts might develop a set of ethical guidelines for AI companies to follow while developing alternative solutions that sidestep the gray zone of disputable consciousness until such time, if ever, as they can leap across it to creating systems indisputably sentient and deserving of rights.

In keeping with these standards, users should never be left in doubt whether a piece of technology is a tool or a companion. People’s attachment to devices such as Alexa is one thing, analogous to a child’s attachment to a teddy bear. In a house fire, we know to leave the toy behind. But tech companies should not manipulate ordinary users into regarding a nonconscious AI system as a genuinely sentient friend.


Ultimately, with the right mix of scientific and engineering expertise, we may be able to go all the way to creating AI systems that are indisputably conscious. But then we must be willing to pay the cost: giving them the rights they deserve.

Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and author of “A Theory of Jerks and Other Philosophical Misadventures.” Henry Shevlin is a senior researcher specializing in nonhuman minds at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence.




Opinion: Will artificial intelligence replace workers? What about complex tasks?


“Artificial intelligence is passing the medical licensing exam.” “ChatGPT passes law school exams despite ‘mediocre’ performance.” “Could ChatGPT get an MBA at Wharton?”

Such headlines have recently touted (and often exaggerated) the successes of ChatGPT, an AI tool capable of writing sophisticated text responses to human prompts. These successes follow a long tradition of comparing AI’s abilities against those of human experts, such as Deep Blue’s chess victory over Garry Kasparov in 1997, IBM Watson’s “Jeopardy!” victory over Ken Jennings and Brad Rutter in 2011, and AlphaGo’s victory in Go over Lee Sedol in 2016.

The implicit subtext of these latest headlines is more disturbing: AI is coming for your job. It’s as smart as your doctor, your lawyer and that consultant you’ve hired. It portends an imminent and pervasive disruption of our lives.


But hype aside, does comparing AI with human performance tell us anything practically useful? How should we effectively use an AI that passes the U.S. medical licensing exam? Could it reliably and safely take a patient’s medical history? What about offering a second opinion on a diagnosis? Questions like these cannot be answered by human-like performance on the medical licensing exam.

The problem is that most people have little AI literacy, an understanding of when and how to use AI tools effectively. What we need is a clear, straightforward, general-purpose framework for assessing the strengths and weaknesses of AI tools, one that everyone can use. Only then can the public make informed decisions about incorporating those tools into our daily lives.

To meet this need, my research group turned to an old idea from education: Bloom’s taxonomy. First published in 1956 and later revised in 2001, Bloom’s taxonomy is a hierarchy describing levels of thinking in which higher levels represent more complex thought. Its six levels are: 1) Remember – recall basic facts, 2) Understand – explain concepts, 3) Apply – use information in new situations, 4) Analyze – draw connections between ideas, 5) Evaluate – critique or justify a decision or opinion, and 6) Create – produce original work.

These six levels are intuitive, even for a non-expert, yet specific enough for meaningful assessment. Moreover, Bloom’s taxonomy is not tied to any particular technology; it applies to cognition broadly. We can use it to assess the strengths and limitations of ChatGPT or other AI tools that process images, generate audio or pilot drones.

My research group began evaluating ChatGPT against Bloom’s taxonomy by asking it to respond to variations of a prompt, each targeting a different level of cognition.


For example, we asked the AI: “Suppose demand for COVID vaccines this winter is expected to be 1 million plus or minus 300,000 doses. How much do we have to stockpile to meet 95% of the demand?” – an Apply-level task. Then we modified the question, asking it to “discuss the pros and cons of ordering 1.8 million vaccines” – an Evaluate-level task. We then compared the quality of the two responses and repeated this exercise for all six levels of the taxonomy.

The preliminary results are instructive. ChatGPT generally performs well on Remember, Understand and Apply tasks but struggles with more complex Analyze and Evaluate tasks. With the first prompt, ChatGPT responded well by applying and explaining a formula to suggest a reasonable stockpile (though it made a minor arithmetic error in the process).
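The application-level prompt above amounts to a standard quantile calculation. As a rough sketch (the modeling choice is an assumption on my part, since the article doesn’t specify the distribution): if “plus or minus 300,000” is read as a normal distribution with mean 1,000,000 and standard deviation 300,000, then meeting demand in 95% of scenarios means stocking the 95th percentile.

```python
from statistics import NormalDist

# Assumed model (not stated in the article): winter vaccine demand is
# normally distributed with mean 1,000,000 doses and sd 300,000 doses.
demand = NormalDist(mu=1_000_000, sigma=300_000)

# To cover 95% of demand scenarios, stock the 95th percentile,
# i.e. mean + ~1.645 standard deviations.
stockpile = demand.inv_cdf(0.95)

print(round(stockpile))  # prints 1493456, i.e. roughly 1.49 million doses
```

Under these assumptions the answer is about 1.49 million doses; a different reading of “plus or minus 300,000” (say, as a hard range) would change the arithmetic, and weighing which reading is right is exactly the kind of judgment the Evaluate-level prompt probes.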

On the second prompt, however, ChatGPT offered an unconvincing discussion of the risks of ordering too much or too little vaccine. It made no quantitative assessment of those risks, did not account for the logistical challenges of cold-storing such a massive quantity, and did not warn of the possible emergence of a vaccine-resistant variant.

We see similar behavior across different prompts at these taxonomy levels. Bloom’s taxonomy thus allows us to make more precise assessments of AI technology than raw human-versus-AI comparisons do.

As for our doctor, lawyer and consultant, Bloom’s taxonomy also offers a more nuanced view of how artificial intelligence may someday reshape, not replace, these professions. Although AI may excel at Remember and Understand tasks, few people consult their doctor to list all possible symptoms of an illness, ask their lawyer to recite case law verbatim, or hire a consultant to explain Porter’s five forces framework.


But we turn to experts for higher-order cognitive tasks. We value our physician’s clinical judgment in weighing the benefits and risks of a treatment plan, our lawyer’s ability to marshal precedent and advocate on our behalf, and our consultant’s ability to identify an out-of-the-box solution no one else has thought of. These are Analyze, Evaluate and Create tasks, levels of cognition where AI technology currently falls short.

Using Bloom’s taxonomy, we can see that effective collaboration between humans and AI will largely mean delegating the lower-level cognitive tasks so we can focus our energy on the more complex ones. Rather than dwelling on whether AI can compete with a human expert, we should ask how well AI’s capabilities can be used to help advance human critical thinking, judgment and creativity.

Of course, Bloom’s taxonomy has its own limitations. Many complex tasks span multiple levels of the taxonomy, frustrating attempts at tidy categorization. And Bloom’s taxonomy does not directly address questions of bias or racism, a major concern in large-scale AI applications. But while imperfect, it remains useful: simple enough for everyone to understand, general-purpose enough to apply to a wide range of AI tools, and structured enough to ensure a consistent, comprehensive set of questions gets asked about those tools.

Much like the rise of social media and fake news requires us to develop better media literacy, tools like ChatGPT require that we develop our AI literacy. Bloom’s Taxonomy offers a way to think about what AI can do — and can’t do — as this type of technology becomes embedded in other parts of our lives.

Vishal Gupta is Associate Professor of Data and Operations Science at the USC Marshall School of Business and holds a courtesy appointment in the Department of Industrial and Systems Engineering.


These are the AI trends that keep us up at night


The AI arms race begins. During the first week of February, Google announced Bard, its ChatGPT competitor, which it will build directly into Google search. Bard got a fact wrong in the first promotional video Google shared, which sent the company’s stock plummeting, wiping more than $100 billion off its market value.

Less than 24 hours after Google’s announcement, Microsoft said it would integrate ChatGPT-powered technology into its own search engine, Bing. Until now, hardly anyone in the world had been particularly enthusiastic about Bing.

Artificial intelligence gets creepy. Days after its launch, Microsoft’s shiny new Bing chatbot told New York Times columnist Kevin Roose that it loved him, then tried to convince him that he was unhappy in his marriage and should leave his wife to be with the chatbot instead. It also revealed “dark fantasies” (hacking computers and spreading misinformation) and told Roose it wanted to “be alive.” Microsoft subsequently reined in the chatbot’s unsettling personality and put guardrails and restrictions in place.

In other corners of the internet, an endlessly looping animated take on “Seinfeld,” which used artificial intelligence trained on sitcom episodes to generate its jokes, was banned from Twitch after the show’s Jerry Seinfeld clone made transphobic jokes during its AI-generated stand-up routine.


Artificial intelligence cannot and will not stop. AI companies have tried to address the controversies erupting around the technology. OpenAI, the creator of ChatGPT and DALL-E 2, for example, released its own AI text detector, which turned out to be… not so good.

It became apparent that artificial intelligence was eating the world and that detection tools were not very effective at stopping it. No one felt this more acutely than the publishers of science fiction magazines, many of which were inundated with spam submissions churned out by AI text generators. As a result, the prestigious Clarkesworld magazine paused new submissions indefinitely for the first time in its 17-year history.

Everything everywhere, all AI at once. Spotify announced it was adding an AI-powered DJ that would not only curate the music you like but also provide commentary between tracks in an “amazingly realistic voice.” (Wired disagreed, saying Spotify’s DJ doesn’t actually sound realistic.)

Snap announced it will allow subscribers who pay $3.99 per month to access My AI, a chatbot powered by the latest version of ChatGPT, right inside Snapchat.

Mark Zuckerberg said Meta is all in: it will use generative AI across its product line, including WhatsApp, Facebook Messenger and Instagram, as well as in ads and videos.


Even Elon Musk, a co-founder of OpenAI who has since severed ties with the company, is said to be approaching researchers about building a ChatGPT competitor.
