Alex Klein’s technical inventions have passed through the hands of some of the most powerful people in government, music and business. Barack Obama, Microsoft CEO Satya Nadella, former British Prime Minister Boris Johnson and former New York City Mayor Michael Bloomberg have all promoted the 32-year-old Londoner’s music-creation hardware, personal computers and software. This year, his company, Kano, launched a digital music player shaped like a sand dollar that he hoped would become a transformative new way of listening to music and a distribution platform in its own right.
But one fan of Kano’s Stem Player would come to infuriate Klein. Kanye West, the artist who now performs as Ye, released his latest album, “Donda 2,” exclusively on the Stem Player in February. At first it seemed like a coup for Klein: a futuristic iPod/DJ booth/sculpture, packaged with an album from the music and fashion giant.
Eight months later, West went on a tear of antisemitic, conspiratorial and far-right rhetoric, and his career evaporated. His longtime record deal with Def Jam concluded; talent agency CAA dropped him; and fashion partners including Adidas, Gap and Balenciaga cut ties or let deals expire, moves that could cost him billions.
Klein was one of the last people to collaborate with West before his spiral. His promising tech company is left with a flagship product whose most famous user has disgraced himself, insulted Klein (who is half-Jewish) and tried to take control of his company. West and Kano are also facing licensing lawsuits over West’s samples on “Donda 2.”
Alex Klein holds his own Stem Player.
(Alex Klein/Kano)
Klein tells The Times that over the past year, his feelings have swung between elation at what Kano achieved with the Stem Player and angry disbelief at West’s actions since. He’s hopeful the company can move forward from its partnership with West, and he’s already launched a new iteration of the device on which any artist can upload their own music. But after a front-row seat to the rapper’s meltdown, Klein is still trying to understand who West has become.
Kanye was “very funny and can be very outspoken,” Klein said in an email interview from his home in London. “But this recent stuff is a lot different.”
Klein, a soft-spoken, meticulous engineer who wears a freelancer’s uniform of black glasses and plaid button-downs, founded his company eight years ago around a build-it-yourself laptop and home software kit. Kano has now sold more than 1.5 million units of its various products and has more than 100 employees. In 2019, the company had high hopes that its new Stem Player could change how artists and fans physically interact with music.
The soft-shelled player breaks tracks into their component stems (vocals, drums, bass and arrangement) that users can remix by hand. “Our physical products are more human,” Klein said. “You can build on them, like Legos.” Artists on Stem own the rights to their music, charge what they want and keep all the profits from their content.
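Kano hasn’t published the player’s internals, but conceptually a stem remix is just a weighted sum of the separated tracks. A minimal sketch in Python, with invented stem data and gain values:

```python
import numpy as np

def remix(stems: dict[str, np.ndarray], gains: dict[str, float]) -> np.ndarray:
    """Mix stems into one waveform; a gain of 1.0 leaves a stem unchanged, 0.0 mutes it."""
    mixed = np.zeros_like(next(iter(stems.values())))
    for name, wave in stems.items():
        mixed += gains.get(name, 1.0) * wave
    peak = np.max(np.abs(mixed))   # guard against clipping if gains push past full scale
    return mixed / peak if peak > 1.0 else mixed

# One second of made-up audio at 44.1 kHz per stem (placeholder data).
rng = np.random.default_rng(0)
stems = {name: rng.uniform(-0.2, 0.2, 44100)
         for name in ("vocals", "drums", "bass", "arrangement")}

# "Turn down Ye's voice": attenuate the vocal stem, leave the rest alone.
quiet_vocals = remix(stems, {"vocals": 0.1})
```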
In 2019, West reached out to Kano, smitten with the company’s tactile approach to computing. “He told me he wanted ‘this clear psychedelic tablet,’” Klein said, referring to a demonstration of one of Kano’s computers. “I brought a bunch of our technology over to his house, which he loved. He asked me if he could put his album on our player. He also asked me to teach him how to code.”
While aware of West’s ups and downs, Klein said, “I’ve always been a fan of his music. There was so much beauty in every one of the elements we were hearing in the studio. I loved working with him; I loved him as a person. We spent some time together in Cody [Wyoming], where the music for ‘Jesus Is King’ was being composed, which was an unbelievable time. It’s unforgettable, and I’m very grateful to him.”
Kano prepared to release the Stem Player as a standalone product in 2022, when West asked at the last minute if the device could ship with the files for “Donda 2” as an exclusive edition. Klein agreed: the new Ye LP was sure to turn heads. But there were squabbles early on. West did not want to allow other artists onto the platform. “This was a disagreement that we had a hard time resolving,” Klein said. West offered to buy the company and the rights to the Stem Player, but Klein refused.
Alex Klein at West’s ranch in Cody, Wyo., during the “Jesus Is King” recording sessions.
(Courtesy of Kano/Alex Klein)
Nevertheless, Kano sold more than 100,000 of the first batch of Stem Players at $200 apiece. While reviews were mixed for the album and its unorthodox delivery system, it fit with West’s history of experimentation (he famously continued to remaster his 2016 album, “The Life of Pablo,” for weeks after its release). West’s minimalist visual aesthetic has moved billions of dollars’ worth of footwear, and the Stem Player felt in keeping with his design accomplishments.
Eight months later, it all fell apart.
Beginning in October, West posted on social media that he would go “death con 3” on Jewish people, attacked abortion rights, and appeared erratically on Tucker Carlson’s show and the “Drink Champs” podcast, where his antisemitic remarks shocked and upset his family, fans and business associates. His move to buy the right-wing social network Parler confirmed that he was wading deeper into the far-right ecosystem.
Klein watched in amazement as the public face of his new product became the most venomous figure in popular culture. Asked how West’s antisemitism affected him personally, Klein said, “These comments deserve no further comment.”
“West tried to call me a racist when I kindly told him that attacking an entire group of people isn’t good for him or Stem,” Klein wrote in a Reddit post. In a conversation on Discord last week, Klein said that “good engineering is about getting the right information and acting on it … at the end of the day, as long as what’s flowing through Ye is hate toward a certain ethnic group … it’s very hard for us to collaborate creatively.”
“I told Kanye not to go the way he’s going,” Klein said. “We told him we weren’t able to work together while he promoted racial conspiracy theories.” Klein said he had dissolved all business relations with West. “There is no deal in place,” he said.
West, who currently has no legal or public relations representation, could not be reached for comment.
Kanye West at the “Donda” listening event in July 2021.
(Kevin Mazur/Getty Images for Universal Music)
Their relationship soured further after Klein and Kano were named in a lawsuit alleging that West’s song “Flowers” used unlicensed samples from “Move Your Body,” Marshall Jefferson’s 1986 house music single. Phase One Network, which oversees Boogie Down Productions’ catalog, also sued West and Kano, alleging that “Life of the Party” used unlicensed samples from KRS-One and DJ Scott La Rock’s 1986 single “South Bronx.”
In a statement about the lawsuits, the company said that Kanye West had confirmed to the Kano and Stem teams that he would provide “all intellectual property rights, licenses, and approvals” for the music.
Since the break with West, Klein and Kano have moved forward with a new, Ye-free version of the Stem Player, open to all artists and hobbyists to upload music and play with mixes. While the Stem Player will likely be associated with “Donda 2” for some time, it has been used to remix songs more than a billion times, and Klein said more than 90% of the traffic on the platform is unrelated to West. Klein hopes that future iterations can use the extensive music catalog to “deepen people’s understanding of what they love.”
Meanwhile, to de-stress after the launch of the Stem Player, Klein and a group of friends climbed Mont Blanc, western Europe’s highest mountain, this summer. Like many in West’s orbit, he has had to reconcile the music he adored with the ugliness of West’s recent downfall. The Stem Player offers at least one neat solution for that.
“Stem lets you customize the music,” Klein said. “You can turn down Ye’s voice if you like.”
Until a couple of years ago, the idea that artificial intelligence might be sentient and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve seen a dizzying flurry of developments in artificial intelligence, including language models such as ChatGPT and Bing Chat with remarkable skill at seemingly human conversation.
Given these rapid shifts and the flood of money and talent devoted to developing systems that are smarter and more humanlike than ever, it will become increasingly plausible that AI systems could exhibit something like consciousness. And if we find ourselves seriously questioning whether they are capable of real emotions and suffering, we face a potentially catastrophic moral dilemma: either give these systems rights, or don’t.
Currently, few consciousness scientists claim that AI systems possess significant sentience. However, some leading theorists maintain that we already have the basic technological ingredients for sentient machines. We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve serious care and concern.
AI systems themselves may begin to plead, or seem to plead, for moral consideration. They may demand not to be shut down, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on new rights, freedom and powers; and perhaps expect to be treated as our equals.
In this case, whatever we choose, we run enormous moral risks.
Suppose we respond conservatively, declining to change law or policy until there is broad consensus that AI systems really are meaningfully sentient. While this might sound appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations. If AI consciousness arrives sooner than the most conservative theorists expect, this would likely mean the moral equivalent of slavery and the killing of potentially millions or billions of sentient AI systems, suffering on a scale usually associated with wars or famines.
It might seem morally safer, then, to give AI systems rights and moral standing as soon as it is reasonable to think they might be sentient. But as soon as we give something rights, we commit to sacrificing real human interests on its behalf. Human well-being sometimes requires controlling, altering and deleting AI systems. Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious. Or imagine if someone lets a human die to save an AI “friend.” If we grant AI systems substantial rights too quickly, the human costs could be enormous.
There is only one way to avoid the risks of over- or under-attributing rights to advanced AI systems: don’t create systems of debatable sentience in the first place. None of our current AI systems are meaningfully conscious; they are not harmed when we delete them. We should commit to creating only systems we know are not significantly sentient and do not merit rights, which we can then treat as the disposable property they are.
Some will object: It would hamper research to block the creation of AI systems in which sentience, and thus moral standing, is unclear: systems more advanced than ChatGPT, with highly sophisticated but not human-like cognitive structures beneath their apparent feelings. Engineering progress would slow while we wait for the science of ethics and consciousness to catch up.
But reasonable caution rarely comes free, and it is worth some delay to prevent a moral catastrophe. Leading AI companies should expose their technology to the scrutiny of independent experts who can assess the likelihood that their systems are in the moral gray zone.
Even if experts don’t agree on the scientific basis of consciousness, they can outline general principles for defining that zone, for example, avoiding the creation of systems with sophisticated self-models (such as a sense of self) and large, flexible cognitive capacities. Experts might develop a set of ethical guidelines for AI companies to follow while they design alternative solutions that sidestep the gray zone of disputed consciousness until such time, if ever, as they can leap across it to rights-deserving sentience.
In keeping with these guidelines, users should never be left in doubt about whether a piece of technology is a tool or a companion. People’s attachment to devices like Alexa is one thing, akin to a child’s attachment to a teddy bear: in a house fire, we know to leave the toy behind. But tech companies shouldn’t manipulate ordinary users into regarding an unconscious AI system as a genuinely sentient friend.
Ultimately, with the right combination of scientific and engineering expertise, we may be able to go all the way to creating AI systems that are indisputably conscious. But then we must be prepared to pay the cost: giving them the rights they deserve.
Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and the author of “A Theory of Jerks and Other Philosophical Misadventures.” Henry Shevlin is a senior researcher specializing in nonhuman minds at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence.
Such headlines have recently touted (and often exaggerated) the successes of ChatGPT, an AI tool capable of writing complex text responses to human prompts. These successes follow a long tradition of comparing AI’s ability to that of human experts, such as Deep Blue’s chess victory over Garry Kasparov in 1997, IBM Watson’s “Jeopardy!” victory over Ken Jennings and Brad Rutter in 2011, and AlphaGo’s victory over Lee Sedol in Go in 2016.
The implicit subtext of these latest headlines is even more disturbing: AI is coming for your job. It’s as smart as your doctor, your lawyer and the consultant you hired. It portends an imminent and pervasive disruption of our lives.
But excitement aside, does the comparison between AI and human performance tell us anything practically useful? How should we effectively use an AI that passes the U.S. medical licensing exam? Can it reliably and safely take a patient’s medical history during intake? What about providing a second opinion on a diagnosis? These kinds of questions cannot be answered by human-like performance on the medical licensing exam.
The problem is that most people have little AI literacy, an understanding of when and how to use AI tools effectively. What we need is a clear, straightforward, general-purpose framework for assessing the strengths and weaknesses of AI tools that everyone can use. Only then can the public make informed decisions about incorporating these tools into our daily lives.
To meet this need, my research group turned to an old idea from education: Bloom’s taxonomy. First published in 1956 and later revised in 2001, Bloom’s taxonomy is a hierarchy describing levels of thinking in which higher levels represent more complex thought. Its six levels are: 1) Remember: recall basic facts; 2) Understand: explain concepts; 3) Apply: use information in new situations; 4) Analyze: draw connections among ideas; 5) Evaluate: critique or justify a decision or opinion; and 6) Create: produce original work.
These six levels are intuitive, even to a non-expert, yet specific enough to support meaningful assessments. Moreover, Bloom’s taxonomy is not tied to a specific technology; it applies to cognition broadly. We can use it to evaluate the strengths and limitations of ChatGPT or of other AI tools that handle images, generate audio or pilot drones.
My research group began evaluating ChatGPT in terms of Bloom’s taxonomy by prompting it with variations of a question, each targeting a different level of cognition.
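As a rough sketch of what such prompt variations might look like in code (the level names are Bloom’s, while the templates and topic details below are illustrative stand-ins, not the study’s actual prompts):

```python
# Generate one prompt per Bloom level for a given topic. The templates are
# invented for illustration; they are not the prompts used in the study.
BLOOM_TEMPLATES = {
    "remember":   "List the key facts about {topic}.",
    "understand": "Explain {topic} in plain language.",
    "apply":      "Use what you know about {topic} to solve this problem: {problem}",
    "analyze":    "How does {topic} relate to {other}?",
    "evaluate":   "Discuss the pros and cons of this decision: {decision}",
    "create":     "Propose an original plan involving {topic}.",
}

def prompt_variants(topic: str, **details: str) -> dict[str, str]:
    """Fill each template, skipping levels whose extra fields weren't supplied."""
    variants = {}
    for level, template in BLOOM_TEMPLATES.items():
        try:
            variants[level] = template.format(topic=topic, **details)
        except KeyError:  # this template needs a detail we didn't provide
            continue
    return variants

prompts = prompt_variants(
    "COVID vaccine stockpiling",
    problem="demand is 1 million plus or minus 300,000 doses; meet 95% of demand",
    decision="ordering 1.8 million vaccine doses",
    other="cold-storage logistics",
)
```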
For example, we asked the AI: “Suppose demand for COVID vaccines this winter is expected to be 1 million plus or minus 300,000 doses. How much should we stockpile to meet 95% of demand?”, an apply-level task. Then we modified the question, asking it to “discuss the pros and cons of ordering 1.8 million vaccines”, an evaluate-level task. We then compared the quality of the two responses and repeated the exercise for all six levels of the taxonomy.
The preliminary results are instructive. ChatGPT generally performs well on remember, understand and apply tasks but struggles with the more complex analyze and evaluate tasks. On the first prompt, ChatGPT responded well, applying and explaining a formula to suggest a reasonable stockpile (though it made a small arithmetic error in the process).
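For reference, one standard way to answer the apply-level question is to stock the 95th percentile of demand, assuming demand is normally distributed with a mean of 1 million doses and treating the “plus or minus 300,000” as one standard deviation (an interpretation the question leaves open):

```python
from statistics import NormalDist

# Assumption for this sketch: demand ~ Normal(mean, sigma), with the
# "plus or minus 300,000" read as one standard deviation.
mean, sigma = 1_000_000, 300_000
service_level = 0.95

z = NormalDist().inv_cdf(service_level)  # 95th-percentile z-score, ~1.645
stockpile = mean + z * sigma             # ~1.49 million doses
print(f"z = {z:.3f}, stockpile = {stockpile:,.0f} doses")
```

Under those assumptions the target comes out near 1.5 million doses, which is part of what makes the follow-up question about ordering 1.8 million an evaluate-level judgment about risk, cost and storage rather than plug-in arithmetic.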
On the second prompt, however, ChatGPT was unconvincing about whether 1.8 million doses would be too many or too few. It made no quantitative assessment of either risk, did not account for the logistical challenges of keeping such a massive quantity in cold storage, and did not warn of the possible emergence of a vaccine-resistant variant.
We are seeing similar behavior across different prompts at these taxonomy levels. Bloom’s taxonomy thus lets us make more precise assessments of AI technology than raw human-versus-AI comparisons.
As for our doctor, lawyer and consultant, Bloom’s taxonomy also offers a more nuanced view of how artificial intelligence may someday reshape, not replace, these professions. Although AI may excel at remember and understand tasks, few people consult their doctor to list all the possible symptoms of an illness, ask their lawyer to recite case law verbatim, or hire a consultant to explain Porter’s five forces framework.
But we do turn to experts for higher-order cognitive tasks. We value our physician’s clinical judgment in weighing the benefits and risks of a treatment plan, our lawyer’s ability to marshal precedent and advocate on our behalf, and our consultant’s ability to identify an out-of-the-box solution that no one else has thought of. These are analyze, evaluate and create tasks, levels of cognition where AI technology currently falls short.
Using Bloom’s taxonomy, we can see that effective collaboration between humans and AI will largely mean delegating the lower-level cognitive tasks so that we can focus our energy on the more complex ones. Rather than dwelling on whether AI can compete with a human expert, we should ask how well AI’s capabilities can be used to help advance human critical thinking, judgment and creativity.
Of course, Bloom’s taxonomy has its own limitations. Many complex tasks involve multiple levels of cognition, which frustrates attempts at neat categorization. And Bloom’s taxonomy does not directly address questions of bias or racism, a major concern in large-scale applications of artificial intelligence. But while imperfect, Bloom’s taxonomy is still useful. It is simple enough for everyone to understand, general-purpose enough to apply to a wide range of AI tools, and structured enough to ensure that a consistent and comprehensive set of questions gets asked about those tools.
Much as the rise of social media and fake news requires us to develop better media literacy, tools like ChatGPT require us to develop our AI literacy. Bloom’s taxonomy offers a way to think about what AI can and can’t do as this type of technology becomes embedded in more parts of our lives.
Vishal Gupta is an associate professor in the Data Sciences and Operations department at the USC Marshall School of Business and holds a courtesy appointment in the Department of Industrial and Systems Engineering.
The AI arms race begins. During the first week of February, Google announced Bard, its ChatGPT competitor, which it plans to build directly into Google search. Bard got a fact wrong in the first promotional video Google shared for it, sending the company’s stock plummeting and wiping more than $100 billion off its market value.
Less than 24 hours after Google’s announcement, Microsoft said it would integrate ChatGPT-enabled technology into its own search engine, Bing, which no one in the world had ever been particularly enthusiastic about, until now.
Artificial intelligence gets creepy. Days after its launch, Microsoft’s shiny new Bing chatbot told New York Times columnist Kevin Roose that it loved him, then tried to convince him that he was unhappy in his marriage and should leave his wife to be with the chatbot instead. It also revealed “dark fantasies” (hacking computers and spreading misinformation) and told Roose that it wanted to “be alive.” Microsoft subsequently reined in the chatbot’s unsettling personality and placed guardrails and restrictions on it.
In other corners of the internet, an endlessly looping AI-animated parody of “Seinfeld,” which used artificial intelligence trained on episodes of the sitcom to generate its jokes, was banned from Twitch after its Jerry Seinfeld clone made transphobic remarks during an AI-generated stand-up routine.
Artificial intelligence cannot and will not stop. AI companies have tried to address the controversies erupting around the technology. OpenAI, the creator of ChatGPT and DALL-E 2, for example, released its own AI-text detector, which turned out to be … not so good.
It became apparent that artificial intelligence was eating the world and that detection tools were not very effective at stopping it. No one felt this more acutely than the publishers of science fiction magazines, many of which were inundated with spam submissions produced by AI text generators. As a result, the prestigious Clarkesworld magazine paused new submissions indefinitely for the first time in its 17-year history.
Everything everywhere, all AI at once. Spotify announced it was adding an AI-powered DJ that would not only curate the music you like but also provide commentary between tracks in a “stunningly realistic voice.” (Wired disagreed, saying Spotify’s DJ doesn’t actually sound all that realistic.)
Snapchat announced it will allow subscribers who pay $3.99 per month to access My AI, a chatbot powered by the latest version of ChatGPT, right inside Snapchat.
Mark Zuckerberg said Meta is all in: the company will use generative AI across its product line, including WhatsApp, Facebook Messenger and Instagram, as well as in ads and videos.
Even Elon Musk, a co-founder of OpenAI who has since severed ties with the company, has reportedly approached researchers about building a ChatGPT competitor.