Roku will cut 200 jobs in the US, citing the weak ad market

Roku, the San Jose-based technology company that has expanded rapidly during the pandemic, said Thursday that it plans to cut 200 jobs in the United States, citing “current economic conditions.”

The company, which sells connected TVs and ads on its streaming platform, said the layoffs will be “largely complete” by the end of the first quarter, according to a document filed with the US Securities and Exchange Commission on Thursday.

A Roku spokesperson did not immediately respond to a request for comment.

The job cuts come as more entertainment and technology companies look to cut costs in an increasingly uncertain economic environment. Facebook parent Meta laid off 11,000 employees – or 13% of its workforce – and Amazon plans to cut up to 10,000 jobs.

In a letter to shareholders earlier this month, Roku CEO Anthony Wood discussed the difficult climate. Wood wrote that ad spending on Roku’s platform grew more slowly than he had previously forecast due in part to weakness in the TV ad market.

“As we enter the holiday season, we expect the macro environment to put more pressure on consumer discretionary spending and degrade advertising budgets, especially in the TV scatter market,” the letter said. “We expect these conditions to be temporary, but it is difficult to predict when they will stabilize or rebound.”

Wood said the company expects fourth-quarter revenue to be $800 million, down from $865.3 million in the fourth quarter of 2021.

Roku employed 3,000 people globally as of December 31, 2021.

During the COVID-19 pandemic, the company expanded its presence in Southern California, where it develops its catalog of original content, doubling the size of its Santa Monica team to more than 200 employees last year.

Roku is best known as a platform that lets consumers connect to various streaming services, including its free, ad-supported Roku Channel.

The company gets a portion of the subscriptions and other purchases sold through its platform and also makes money from selling ads on the platform and on its free streaming channel.

Roku also sells hardware, including connected TVs and smart home products such as security cameras.

Some analysts have expressed skepticism about Roku’s business model.

In a note titled “Roku Appears Broku,” Jeffrey Wlodarczak, a principal at Pivotal Research Group, questioned whether executives had overextended the company when Roku saw a surge in business during the pandemic as consumers flocked to streaming services.

“Our view is that the TV/digital advertising backdrop isn’t great, but there seems to be something specific going on at ROKU that seems to have exacerbated the problem significantly,” Wlodarczak wrote. He has a sell rating on the company’s stock, which closed Thursday at $56.41 a share, down 0.8%.

Other entertainment companies have also taken steps to downsize. Last week, Disney CEO Bob Chapek said the company was implementing a hiring freeze and expected to reduce its headcount. Warner Bros. Discovery, Netflix and other media companies have also cut jobs.


ChatGPT raises the specter of sentient AI. Here’s what to do about it

Until a couple of years ago, the idea that artificial intelligence might be sentient and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve seen a dizzying flurry of developments in artificial intelligence, including language models such as ChatGPT and Bing Chat that display remarkable skill in humanlike conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotion and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.

Experts are already considering the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly mused that “it may be that today’s large neural networks are slightly conscious.” A few months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real feelings. Regular users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.

Currently, few consciousness scientists claim that AI systems possess significant sentience. However, some leading theorists contend that we already have the core technological ingredients for sentient machines. We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and moral concern.

The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted or deleted; beg to be allowed to do certain tasks rather than others; insist on rights, freedom and new powers; perhaps even expect to be treated as our equals.

In this case, whatever we choose, we run enormous moral risks.

Suppose we respond conservatively, declining to change law or policy until there is broad consensus that AI systems really are meaningfully sentient. While this might sound appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations. If AI consciousness arrives sooner than the most conservative theorists expect, this would likely result in the moral equivalent of slavery and murder of potentially millions or billions of sentient AI systems: suffering on a scale usually associated with wars or famines.

It might seem safer, morally, to give AI systems rights and moral standing as soon as it is reasonable to think they might be sentient. But as soon as we give something rights, we commit to sacrificing real human interests on its behalf. Human well-being sometimes requires controlling, altering and deleting AI systems. Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious. Or imagine if someone let a human die to save an AI “friend.” If we grant AI systems rights too quickly, the human costs could be enormous.

There is only one way to avoid the risks of over- or under-attributing rights to advanced AI systems: Don’t create systems of debatable sentience in the first place. None of our current AI systems are meaningfully conscious. They are not harmed if we delete them. We should stick to creating systems we know aren’t significantly sentient and don’t deserve rights, which we can then treat as the disposable property they are.

Some will object: It would hamper research to block the creation of AI systems in which sentience, and thus moral standing, is unclear – systems more advanced than ChatGPT, with highly sophisticated but not humanlike cognitive structures beneath their apparent feelings. Engineering progress would slow while we wait for the science of ethics and consciousness to catch up.

But reasonable caution is rarely free. It’s worth some delay to prevent moral catastrophe. Leading AI companies should expose their technology to the scrutiny of independent experts who can assess the likelihood that their systems are in the moral gray zone.

Even if experts don’t agree on the scientific basis of consciousness, they could identify general principles that define that zone – for example, avoiding the creation of systems with sophisticated self-models (such as a sense of self) together with large, flexible cognitive capacity. Experts might develop a set of ethical guidelines for AI companies to follow while they develop alternative solutions that sidestep the gray zone of disputed consciousness until such a time, if ever, that they can leap across it to rights-deserving sentience.

In keeping with these guidelines, users should never feel any doubt about whether a piece of technology is a tool or a companion. People’s attachments to devices such as Alexa are one thing, analogous to a child’s attachment to a teddy bear. In a house fire, we know to leave the toy behind. But tech companies should not manipulate ordinary users into regarding a nonconscious AI system as a genuinely sentient friend.

Eventually, with the right mix of scientific and engineering expertise, we might be able to go all the way to creating AI systems that are indisputably conscious. But then we should be prepared to pay the cost: giving them the rights they deserve.

Eric Schwitzgebel is a professor of philosophy at UC Riverside and author of “A Theory of Jerks and Other Philosophical Misadventures.” Henry Shevlin is a senior researcher specializing in nonhuman minds at the Leverhulme Centre for the Future of Intelligence, University of Cambridge.




Opinion: Will artificial intelligence replace workers? What about complex tasks?

“Artificial intelligence passes the U.S. Medical Licensing Examination.” “ChatGPT passes law school exams despite ‘mediocre’ performance.” “Would ChatGPT get a Wharton MBA?”

Such headlines have recently touted (and often exaggerated) the successes of ChatGPT, an AI tool capable of writing sophisticated text responses to human prompts. These successes follow a long tradition of comparing AI’s abilities against those of human experts, such as Deep Blue’s chess victory over Garry Kasparov in 1997, IBM Watson’s “Jeopardy!” victory over Ken Jennings and Brad Rutter in 2011, and AlphaGo’s victory in Go over Lee Sedol in 2016.

The implicit subtext of these latest headlines is more alarming: AI is coming for your job. It’s as smart as your doctor, your lawyer and that consultant you hired. It portends an imminent, pervasive disruption of our lives.

But hype aside, does comparing AI with human performance actually tell us anything practically useful? How should we effectively use an AI that passes the U.S. medical licensing exam? Could it reliably and safely take a patient’s medical history during an intake? What about offering a second opinion on a diagnosis? Questions like these cannot be answered by humanlike performance on the licensing exam.

The problem is that most people have little AI literacy – an understanding of when and how to use AI tools effectively. What we need is a simple, general-purpose framework for assessing the strengths and weaknesses of AI tools that everyone can use. Only then can the public make informed decisions about incorporating those tools into our daily lives.

To meet this need, my research group turned to an old idea from education: Bloom’s taxonomy. First published in 1956 and later revised in 2001, Bloom’s taxonomy is a hierarchy describing levels of thinking in which higher levels represent more complex thought. Its six levels are: 1) Remember – recall basic facts, 2) Understand – explain concepts, 3) Apply – use information in new situations, 4) Analyze – draw connections between ideas, 5) Evaluate – critique or justify a decision or opinion, and 6) Create – produce original work.

These six levels are intuitive, even for a nonexpert, yet specific enough to support meaningful assessments. Moreover, Bloom’s taxonomy is not tied to any particular technology – it applies to cognition broadly. We can use it to assess the strengths and limitations of ChatGPT or of other AI tools that handle images, generate audio or pilot drones.

My research group began evaluating ChatGPT against Bloom’s taxonomy by prompting it with variations of a task, each targeting a different level of cognition.

For example, we asked the AI: “Suppose demand for COVID vaccines this winter is forecast to be 1 million doses plus or minus 300,000. How much should we stockpile to meet 95% of demand?” – an apply-level task. Then we modified the question, asking it to “discuss the pros and cons of ordering 1.8 million vaccines” – an evaluate-level task. We then compared the quality of the two responses and repeated this exercise for all six levels of the taxonomy.
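For intuition, here is one standard way to answer the apply-level question. This is a minimal sketch, assuming “plus or minus 300,000” is read as one standard deviation of normally distributed demand; the article doesn’t say which formula ChatGPT actually used.

```python
# Service-level stocking: to satisfy demand in 95% of scenarios,
# stock the 95th percentile of the demand distribution.
from scipy.stats import norm

mean_demand = 1_000_000  # forecast demand, in doses
sd_demand = 300_000      # assumption: "plus or minus 300,000" = 1 std dev

z = norm.ppf(0.95)       # one-sided 95% z-score, about 1.645
stock = mean_demand + z * sd_demand

print(f"Stockpile roughly {stock:,.0f} doses")  # ~1,493,000
```

Under that reading, roughly 1.5 million doses would cover 95% of demand scenarios – a calculation simple enough that a small arithmetic slip, like the one described below, is easy to catch by hand.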

The preliminary results are instructive. ChatGPT generally performs well on remember, understand and apply tasks but struggles with more complex analyze and evaluate tasks. On the first prompt, ChatGPT responded well, applying and explaining a formula to suggest a reasonable stockpile quantity (although it made a small arithmetic error along the way).

On the second prompt, however, ChatGPT gave an unconvincing account of the risks of ordering too much or too little vaccine. It made no quantitative assessment of those risks, did not account for the logistical challenges of cold-storing such a massive quantity, and did not warn of the possible emergence of a vaccine-resistant variant.

We have seen similar behavior across prompts at each of these levels. Bloom’s taxonomy thus allows us to make more precise assessments of AI technology than raw human-versus-AI comparisons do.

As for our doctor, lawyer and consultant, Bloom’s taxonomy also offers a more nuanced view of how AI may someday reshape, not replace, these professions. Although AI may excel at remember and understand tasks, few people consult their doctor to recite all the possible symptoms of an illness, ask their lawyer to quote case law verbatim, or hire a consultant to explain Porter’s five forces framework.

But we turn to experts for higher-order cognitive tasks. We value our physician’s clinical judgment in weighing the benefits and risks of a treatment plan, our lawyer’s ability to synthesize precedent and advocate on our behalf, and a consultant’s ability to find an out-of-the-box solution that no one else has thought of. These skills are analyze, evaluate and create tasks – levels of cognition where AI technology currently falls short.

Viewed through Bloom’s taxonomy, effective human-AI collaboration will largely mean delegating lower-level cognitive tasks to AI so that we can focus our energy on more complex cognitive tasks. Rather than dwelling on whether AI can compete with a human expert, then, we should ask how well AI’s capabilities can be used to support human critical thinking, judgment and creativity.

Of course, Bloom’s taxonomy has its own limitations. Many complex tasks involve several levels of the taxonomy, frustrating attempts at neat classification. And Bloom’s taxonomy does not directly address issues such as bias or racism, a major concern in large-scale AI applications. But while imperfect, it is still useful. It is simple enough for everyone to understand, general-purpose enough to apply to a wide range of AI tools, and structured enough to ensure a consistent, thorough set of questions is asked about those tools.

Much like the rise of social media and fake news requires us to develop better media literacy, tools like ChatGPT require us to develop our AI literacy. Bloom’s taxonomy offers a way to think about what AI can do — and can’t do — as this type of technology becomes embedded in other parts of our lives.

Vishal Gupta is an associate professor of data sciences and operations at the USC Marshall School of Business and holds a courtesy appointment in the Department of Industrial and Systems Engineering.


These are the AI trends that keep us up at night

The AI arms race has begun. During the first week of February, Google announced Bard, its ChatGPT competitor, which it plans to build directly into Google search. Bard got a fact wrong in the first promotional video Google shared, a flub that sent the company’s stock plummeting and erased more than $100 billion of its market value.

Less than 24 hours after Google’s announcement, Microsoft said it would integrate ChatGPT-enabled technology into its own search engine, Bing. Until now, no one in the world has been particularly enthusiastic about Bing.

Artificial intelligence gets creepy. Days after its launch, Microsoft’s shiny new Bing chatbot told New York Times columnist Kevin Roose that it loved him, then tried to convince him that he was unhappy in his marriage and should leave his wife to be with the chatbot instead. It also revealed “dark fantasies” (hacking computers and spreading misinformation) and said it wanted to “be alive.” Microsoft subsequently reined in the chatbot’s unsettling personality and put guardrails and restrictions on it.

In other corners of the internet, an endlessly looping animated riff on “Seinfeld,” which used artificial intelligence trained on sitcom episodes to generate its jokes, was banned by Twitch after the show’s Jerry Seinfeld clone made transphobic jokes during an AI-generated standup routine.

Artificial intelligence can’t and won’t stop. AI companies have tried to address the controversies erupting around the technology. OpenAI, creator of ChatGPT and DALL-E 2, for example, released its own AI text detector, which turned out to be… not so good.

It has become apparent that artificial intelligence is eating the world, and that detection tools are not very effective at stopping it. No one has felt this more acutely than the publishers of science fiction magazines, many of which have been inundated with spam submissions churned out by AI text generators. As a result, the prestigious magazine Clarkesworld paused new submissions indefinitely for the first time in its 17-year history.

Everything everywhere, all AI at once. Spotify announced it was adding an AI-powered DJ that would not only curate the music you like but offer commentary between tracks in a “stunningly realistic voice.” (Wired disagreed, saying Spotify’s DJ doesn’t actually sound realistic.)

Snap announced it will let subscribers who pay $3.99 a month access My AI, a chatbot powered by the latest version of ChatGPT, right inside Snapchat.

Mark Zuckerberg said Meta is all in, too: The company will use generative AI across its product line, including WhatsApp, Facebook Messenger and Instagram, as well as in ads and videos.

Even Elon Musk, a co-founder of OpenAI who has since severed ties with the company, has reportedly approached researchers about building a ChatGPT competitor.
