
Men’s rights activists worshiped Elon Musk


“It’s such a relief that Elon Musk is standing up for free speech and removing censorship,” SIFF co-founder Anil Murty told BuzzFeed News. (He declined to provide evidence that the group’s tweets were shadowbanned before Musk bought Twitter.)

Since taking over, Musk has fired more than 70% of the company’s employees (including content moderators) and, to the delight of the right wing, restored more than 15,000 accounts, including those of Donald Trump and far-right figures like Steve Bannon and MyPillow CEO Mike Lindell. But he has also banned people who criticize him, including journalists, some of whose accounts remain suspended.

Murty said he was “grateful” to Musk for “a small contribution” to SIFF’s cause. Thanks to Musk’s overhaul of Twitter’s verification policy, Murty was finally able to pay to have SIFF’s account verified, he said. “Earlier, verification was discriminatory,” Murty said. “Now it’s a lot fairer.”




These are the AI trends that keep us up at night


The AI arms race begins. During the first week of February, Google announced Bard, its ChatGPT competitor, which it plans to build directly into Google search. Bard got a fact wrong in the first promotional video Google shared for it, a stumble that sent the company’s stock plummeting and wiped more than $100 billion off its market value.

Less than 24 hours after Google’s initial announcement, Microsoft said it would integrate ChatGPT-powered technology into its own search engine, Bing, which until then no one in the world had been particularly enthusiastic about.

Artificial intelligence gets creepy. Days after its launch, Microsoft’s shiny new Bing chatbot told New York Times columnist Kevin Roose that it loved him, then tried to convince him that he was unhappy in his marriage and should leave his wife to be with the chatbot instead. It also revealed “dark fantasies” (hacking computers and spreading misinformation) and told Roose that it wanted to “be alive.” Microsoft soon reined in the chatbot’s unsettling personality and placed new guardrails and restrictions on it.

In other corners of the internet, an endlessly looping animated parody of Seinfeld, which used artificial intelligence trained on episodes of the sitcom to generate its jokes, was banned by Twitch after the show’s Jerry Seinfeld clone made transphobic remarks during an AI-generated stand-up routine.


Artificial intelligence cannot and will not stop. AI companies have tried to address the controversies erupting around the technology. OpenAI, the creator of ChatGPT and DALL-E 2, for example, released its own AI text detector, which turned out to be… not so good.

It became apparent that artificial intelligence was eating the world and that detection tools were not very effective at stopping it. No one felt this more acutely than the publishers of science fiction magazines, many of which were inundated with spam submissions churned out by AI text generators. As a result, the prestigious magazine Clarkesworld paused new submissions indefinitely for the first time in its 17-year history.

Everything everywhere all AI at once. Spotify announced it was adding an AI-powered DJ that would not only curate the music you like but also offer commentary between tracks in an “amazingly realistic voice.” (Wired disagreed, saying Spotify’s DJ doesn’t actually sound realistic.)

Snap announced it will let subscribers who pay $3.99 per month access My AI, a chatbot powered by the latest version of ChatGPT, right inside Snapchat.

Mark Zuckerberg said Meta is all in: it will use generative AI across its product line, including WhatsApp, Facebook Messenger and Instagram, as well as in ads and videos.


Even Elon Musk, a co-founder of OpenAI who has since severed ties with the company, has reportedly approached researchers about building a ChatGPT competitor.





The next step for surveillance AI: getting to know your friends


A gray-haired man walks in an office lobby holding a cup of coffee, staring ahead as he passes through a doorway.

He seems unaware that he is being tracked by a network of cameras that can detect not only where he is but also who he’s been with.

Surveillance technology has long been able to identify you. Now, with the help of artificial intelligence, it is trying to find out who your friends are.

With a few clicks, this “co-appearance” or “correlation analysis” software can find anyone who has appeared on the surveillance cameras within a few minutes of the gray-haired man over the past month, weed out those who may have been near him only once or twice, and flag the man who has appeared 14 times. The software can mark potential interactions between the two men, now considered possible associates, on an instantly searchable calendar.
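
The underlying logic, as described in the demo, amounts to counting how often two people are detected on the same camera within a short time window. The following is a minimal sketch of that idea, not Vintra’s actual implementation; the sighting data, identifiers and thresholds are all hypothetical.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical face-recognition output: (person_id, camera_id, timestamp).
sightings = [
    ("target", "lobby", datetime(2023, 2, 1, 9, 0)),
    ("p42",    "lobby", datetime(2023, 2, 1, 9, 3)),
    ("target", "lobby", datetime(2023, 2, 8, 9, 0)),
    ("p42",    "lobby", datetime(2023, 2, 8, 9, 1)),
    ("p17",    "lobby", datetime(2023, 2, 1, 9, 40)),
]

def co_appearances(sightings, target, window=timedelta(minutes=10)):
    """Count, per person, sightings on the same camera within
    `window` of any sighting of `target`."""
    target_hits = [(cam, ts) for pid, cam, ts in sightings if pid == target]
    counts = Counter()
    for pid, cam, ts in sightings:
        if pid == target:
            continue
        if any(cam == tcam and abs(ts - tts) <= window
               for tcam, tts in target_hits):
            counts[pid] += 1
    return counts

# Keep frequent co-appearances; drop one-off passersby.
counts = co_appearances(sightings, "target")
associates = {pid: n for pid, n in counts.items() if n >= 2}
print(associates)  # {'p42': 2}
```

A production system would add face matching across cameras and time-indexed storage, but the privacy questions raised below stem from exactly this kind of simple correlation.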


Vintra, the San Jose-based company that showed off the technology in an industry video presentation last year, sells co-appearance as part of its suite of video analytics tools. The company boasts on its website about its relationships with the San Francisco 49ers and a Florida police department. The Internal Revenue Service and additional police departments around the country have paid for Vintra’s services, according to a government contracting database.

Although co-appearance technology is already in use by authoritarian regimes such as China, Vintra appears to be the first company to market it in the West, industry professionals say.

In the first frame, the presenter defines a “target.” In the second, the software finds people who appeared in the same frame as him within 10 minutes. In the third, it flags the first person’s “associate.”

(IPVM)


But the company is one of many testing new AI and surveillance applications with little public scrutiny and few formal safeguards against invasions of privacy. In January, for example, New York state officials criticized the owner of Madison Square Garden for using facial recognition technology to bar employees of law firms that had sued the company from attending events at the arena.

Industry experts and watchdogs say that if the co-appearance tool is not in use now (and one analyst is certain it is), it will likely become more reliable and more widely available as AI capabilities advance.

None of the entities that use Vintra contacted by The Times acknowledged using the co-appearance feature in Vintra’s software package. But some did not explicitly rule it out.

China’s government, which has been the most aggressive in using surveillance and artificial intelligence to control its population, uses co-appearance searches to spot protesters and dissidents by merging video with a vast network of databases, something Vintra and its clients would not be able to do, said Conor Healy, director of government research for IPVM, the surveillance research group that hosted Vintra’s presentation last year. He said Vintra’s technology could be used to create “a more basic version” of the Chinese government’s capabilities.

Some state and local governments in the United States restrict the use of facial recognition, particularly in policing, but there is no federal law that applies. No law expressly prohibits police from running co-appearance searches like Vintra’s, “but it’s an open question” whether doing so would violate constitutionally protected rights to freedom of assembly and protection from unauthorized searches, according to Clare Garvie, a specialist in surveillance technology with the National Association of Criminal Defense Lawyers. Few states have any restrictions on how private entities can use facial recognition.


The Los Angeles Police Department terminated a predictive policing program known as PredPol in 2020 amid criticism that it did not stop crime and led to heavier policing of Black and Latino neighborhoods. The software used artificial intelligence to analyze a wide range of data, including suspected gang affiliation, in an effort to predict in real time where property crimes might occur.

In the absence of national laws, many police departments and private companies are left to weigh the balance between security and privacy on their own.

“This is the Orwellian future come to life,” said Senator Edward J. Markey, a Massachusetts Democrat. “A deeply alarming surveillance state where you are tracked, flagged and categorized for use by public and private sector entities that you have no knowledge of.”

Markey plans to reintroduce a bill in the coming weeks that would stop the use of facial recognition and biometric technologies by federal law enforcement and require local and state governments to ban them as a condition of winning federal grants.

Right now, some departments say reliability concerns mean they don’t have to make that choice. But as the technology advances, they will.


Vintra, a San Jose-based software company, presented “correlation analysis” to IPVM, a surveillance research group, last year.

(IPVM)

Vintra executives did not return multiple calls and emails from The Times.

But the company’s CEO, Brent Boekestein, expanded on the technology’s potential uses during a video presentation with IPVM.

“You can go up here and create a target, based on this guy, and then see who that guy hangs out with,” Boekestein said. “You can really start building a network.”


He added that “96% of the time, there is no event that security cares about, but there is always information that the system generates.”

Four agencies connected with the San Jose transit station used in the Vintra demonstration denied that their cameras were used to shoot the company’s video.

Two companies listed on Vintra’s website, the 49ers and Moderna, the pharmaceutical company that made one of the most widely used COVID-19 vaccines, did not respond to emails.

Several police departments acknowledged working with Vintra, but none explicitly said they had run co-appearance searches.

Brian Jackson, assistant chief of police in Lincoln, Nebraska, said his department uses Vintra software to save time analyzing hours of video, quickly scanning for patterns like blue cars and other objects that match descriptions used to solve specific crimes. But the cameras his department has access to, including Ring cameras and those used by businesses, aren’t good enough to match faces, he said.

Advertisement

“There are limitations. It’s not a magic technology,” he said. “It requires careful input for good output.”

Jarrod Kasner, assistant chief in Kent, Washington, said his department uses Vintra software. He said he was not aware of the co-appearance feature and would have to consider whether it is legal in his state, one of the few that restrict the use of facial recognition.

“We’re always looking for technology that can help us because it’s a force multiplier,” he said, for a department with staffing problems. But “we just want to make sure we’re within the bounds to make sure we’re doing it right and professionally.”

The Lee County Sheriff’s Office in Florida said it only uses Vintra on suspects and not “to track people or vehicles not suspected of any criminal activity.”

The Sacramento Police Department said in an email that it uses Vintra “sparingly, if at all” but did not specify whether it has ever used the co-appearance feature.


“We are in the process of reviewing Vintra’s contract and whether we will continue to use its service,” the department said in a statement, adding that it could not point to cases in which the software had helped solve crimes.

The IRS said in a statement that it uses Vintra software to “more efficiently review lengthy video footage for evidence during criminal investigations.” Officials did not say whether the IRS has used the co-appearance tool or where its cameras are located, only that it follows “established agency protocols and procedures.”

Jay Stanley, a senior policy analyst at the American Civil Liberties Union who first highlighted Vintra’s video presentation last year in a blog post, said he is not surprised that some companies and departments are cautious about discussing its use. In his experience, police departments often deploy new technology “without telling, let alone asking, permission of democratic overseers like city councils.”

Stanley warned that the software could be misused to monitor personal and political associations, including potential intimate partners, labor activists, anti-police groups or political rivals.

The technology is already in use, said Danielle VanZandt, who analyzes Vintra for the market research firm Frost & Sullivan. Because she has reviewed confidential documents from Vintra and other companies, she is subject to nondisclosure agreements that prevent her from discussing which individual companies and governments might be using the software.


Retailers, which already collect huge amounts of data on the people who enter their stores, are also testing the software to determine “what else can it tell me?” VanZandt said.

That could include identifying the family members of a bank’s best customers to make sure they are treated well, a use that raises the prospect that those without wealth or family connections will receive lesser treatment.

“The bias concerns are huge in the industry,” VanZandt said, and they are being actively addressed through standards and testing.

Not everyone thinks this technology will be widely adopted. Law enforcement and corporate security agents often find they can use less invasive techniques to obtain similar information, said Florian Matusek of Genetec, a video analytics company that works with Vintra. That includes scanning ticketed entry systems and mobile phone data that have unique features but are not linked to individuals.

“There’s a big difference between, like, product sheets and demo videos and things that get deployed in the field,” Matusek said. “Users often find that other technologies can solve their problem without jumping through all the hoops of installing cameras or dealing with privacy regulation.”

Advertisement

Matusek said he does not know of any Genetec customers using co-appearance searches, a feature his company does not offer. But he could not rule it out.


What does religion say about artificial intelligence?


Sometimes Rabbi Joshua Franklin knows exactly what he wants to talk about in his weekly Shabbat sermons — and other times, not so much. On one of those unseasonably cold afternoons in late December, the spiritual leader of a Jewish center in the Hamptons decided to turn to artificial intelligence.

Franklin, a 38-year-old with dark wavy hair and a friendly vibe, knew OpenAI’s new ChatGPT could write sonnets in the style of Shakespeare and songs in the style of Taylor Swift. Now he wondered whether it could write a sermon in the style of a rabbi.

So he gave it the prompt: “Write a sermon, in the voice of a rabbi, about 1,000 words, and relate this week’s Torah portion to the idea of intimacy and vulnerability, quoting Brené Brown,” the best-selling author and scholar best known for her work on vulnerability, shame, and empathy.


The result, which he shared that evening in the synagogue’s modern blond-wood sanctuary and later posted on Vimeo, was a coherent, if repetitive, sermon that many in his congregation guessed had been written by famous rabbis.

“You’re applauding,” Franklin said after revealing that the sermon he had just delivered was composed by a computer. “I’m terrified.”

As experiments like Franklin’s, and a recent unsettling conversation between a tech columnist and a new Microsoft chatbot, show how eerily humanlike some AI software has become, religious thinkers and institutions are increasingly wading into the conversation about the ethical use of a rapidly expanding technology that may one day develop a consciousness of its own, at least according to its Silicon Valley evangelists. Invoking a wide range of myths, from Icarus to the Tower of Babel to the story of a genie who can grant all of our wishes with disastrous results, they sound an ancient warning about what happens when humans try to play God.

Before delivering the sermon written by ChatGPT, Rabbi Franklin told his congregants that what he was about to read was plagiarized.

“Friends,” he began, reading from the AI-written sermon, “as we gather today to study the Torah portion of the week, Vayigash, let’s reflect on the importance of developing intimacy in our relationships with others.”


The robo-sermon went on to tell the story of Joseph, the son of Jacob, meeting his brothers again after many years. Although they had betrayed him in the past, Joseph received them with warmth and love.

“By approaching them with openness and vulnerability,” Franklin read, “he is able to heal old wounds and create deeper and more meaningful bonds with his siblings. This is a powerful lesson for all of us.”

It was a decent sermon, but not one Franklin would have written. “What it missed was the idea of how we find God in meaningful encounters with others,” he later said. “How community creates a relationship with God in our lives.” In other words, it lacked the sense that the sermon arose from the lived experience of human longing, striving and suffering rather than from a mathematical formula.

As artificial intelligence continues to improve, spiritual leaders may one day be replaced by robots (anything is possible).

But most religious scholars say other ethical concerns about AI are more pressing. They worry about rising financial inequality as automation wipes out thousands of jobs, and they question our ability to exercise free will as we increasingly rely on computer algorithms to make decisions for us in medicine, education, the justice system, and even how we drive and what we watch on TV.


On an existential level, the better AI becomes at mimicking human intelligence, the more it calls into question our understanding of sentience, consciousness, and what it means to be human. Do we want AI-powered robots to become our servants? Will they have feelings? Should we treat them as if they did?

These ethical dilemmas may seem new, but at their core they represent issues that religious traditions such as Judaism, Islam and Christianity have grappled with for thousands of years, religious leaders say.

While religious institutions have not always acted ethically in the past, they have centuries of experience analyzing moral conundrums through the lens of their own belief systems, said the Rev. James Keenan, a Catholic theologian at Boston College.

“There are certain ways you can say that all of these great traditions are problematic,” he said, “but they also have their ideas and their wisdom. They have a history behind them that is worth drawing on.”

Since the early days of artificial intelligence research in the 1950s, the desire to create humanlike intelligence has been compared to the legend of the golem, a mythical creature from Jewish folklore created by powerful rabbis from clay and magic to do the bidding of its master. The most famous golem is the one said to have been made by Rabbi Judah Loew ben Bezalel of Prague in the 16th century to protect the Jewish people from antisemitic attacks. The golem was also an inspiration for Mary Shelley’s Frankenstein.


For centuries, the idea of a living, man-made creature lacking a divine spark or soul has been part of the Jewish imagination. The rabbis argued about whether a golem could be considered a person, whether it could be counted in a minyan (the quorum of 10 men required for traditional Jewish public prayer), whether it could be killed, and how it should be treated.

Through these rabbinical discussions, a moral stance on artificial intelligence emerged long before computers were invented, wrote Nachshon Goltz, a professor of law at Edith Cowan University in Australia, in an article on the Jewish perspective on artificial intelligence. While it is permitted to create artificial entities to assist us in our tasks, “we must remember our responsibility to maintain control over them, and not the other way around,” he wrote.

Rabbi Eliezer Simcha Weiss, a member of the Chief Rabbinate Council of Israel, echoed this idea in a recent talk. “In every golem story,” he said, “the golem is in the end destroyed or dismantled.” In other words, the lesson the rabbis teach is that anything made by man must be controlled by man.

The rabbis also concluded that while a golem cannot be considered a full person, it is still important to treat it with respect.

“The way we deal with these things affects us,” said Goltz. “The way we engage with them determines the development of our personalities and determines the future course of our exercise of moral agency.”


Another cautionary tale comes from Jewish and Islamic folklore: the djinn, a nonhuman entity made of smokeless fire that can sometimes be bound to humans and chained to their will. This is the origin of the story of the genie, who can give us anything we want but cannot be put back in the bottle.

“Djinn stories are an example of what happens when you ask a nonhuman person to fulfill a human’s desires,” said Damien Williams, a professor of philosophy and data science at the University of North Carolina at Charlotte. “What comes out on the other side seems shocking and punishing, but if you actually trace it through, they simply granted those desires to the fullest extent of their logical effects.”

Islam provides another ethical lens through which to look at the development of artificial intelligence. One of the legal principles in Islamic jurisprudence states that warding off harm always takes priority over reaping benefits. In this view, technology that helps some people and puts others out of a job would be considered unethical.

“Many of these technologies are designed and deployed, in many cases, for profit, and the harm that accumulates is sometimes an afterthought,” said Junaid Qadir, a professor of electrical engineering at Qatar University who organized a conference on Islamic ethics and artificial intelligence. “We don’t know what it will be. Technology has its unintended effects.”

In general, Islamic traditions encourage a cautious approach to new technology and its uses, said Aasim Padela, a professor of emergency medicine and bioethics at the Medical College of Wisconsin.


“Things that try to make you rival God are not seen as goals to pursue,” he said. “In trying to seek immortality through brain transfer, or making a better body than the one you have, those motives must be examined. Immortality is in the afterlife, not here.”

The “Rule of Saint Benedict,” a book written in the sixth century as a guide to monastic life, offers an answer to questions about how we should morally interact with AI, both now and in a future when we encounter robots with humanlike traits, said Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict in Minnesota.

In the section of the book addressing the cellarer, the monk in charge of the monastery’s provisions, Saint Benedict instructs him to offer a kind word to all who come to him and to treat all the inanimate objects in his storeroom “as if they were holy vessels of the altar.”

“For me, this is something we can apply to AI,” Herzfeld said. “People always come first, but we must treat AI with respect, with care, because all earthly things must be treated with respect. The way you treat things is part of what forms your character and informs how you treat the Earth and other humans.”

The Catholic Church has been particularly vocal in pushing for an AI ethic that benefits humanity, centers human dignity, and whose sole aim is not greater profit or the gradual replacement of people in the workplace.


In a November 2020 video, Pope Francis declared his intention to pray that robots and artificial intelligence may always serve humanity.

The Vatican’s goal is not to slow the development of artificial intelligence, but the church believes caution is necessary, said Paolo Benanti, a Franciscan monk and one of the Pope’s chief advisors on new technology.

“On the one hand, we do not want to restrict any of the transformative impulses that can lead to great results for humanity; on the other hand, we know that all transitions must have direction,” he wrote in an email. “We have to be aware that if AI is not managed well, it can lead to dangerous or unwanted transitions.”

To this end, Vatican leaders helped formulate the Rome Call for AI Ethics, a pledge first signed in 2020 by representatives of the Pontifical Academy for Life, IBM, Microsoft and the Italian Ministry of Innovation, among others, to support the creation of AI technologies that are transparent, inclusive and impartial. On January 10, leaders from the Jewish and Muslim communities gathered at the Vatican to add their signatures as well.

Asking tech companies to prioritize human goals over corporate interests may seem like a long shot, but the influence of religious leaders on AI ethics should not be underestimated, said Beth Singler, a professor of digital religion at the University of Zurich.


“It can help the multitudes of believers to think critically and use their voice,” she said. “The more conversation there is from important charismatic voices like the Pope, the more likely it is that people at a grassroots level will appreciate what is going on and do something about it.”

Benanti agreed.

“The billions of believers who inhabit the planet can be a tremendous force for turning these values into something tangible in the development and application of artificial intelligence,” he said.

As for Franklin, the Hamptons rabbi, he said his experience with ChatGPT ultimately left him with a sense that the rise of artificial intelligence could have an upside for humanity.

He said that while AI may be able to mimic our words, and even read our emotions, what it lacks is the ability to feel our emotions, understand our pain on a physical level, and connect deeply with others.


“Compassion, love, empathy, that’s what we do best,” he said. “I believe that ChatGPT will force us to hone those skills and, God willing, become more human.”
