The next step for surveillance AI: getting to know your friends

A gray-haired man walks in an office lobby holding a cup of coffee, staring ahead as he passes through a doorway.

He seems unaware that he is being tracked by a network of cameras that can detect not only where he is but also who he’s been with.

Surveillance technology has always been able to identify you. Now, with the help of artificial intelligence, it is trying to figure out who your friends are.

With a few clicks, this “co-appearance” or “correlation analysis” software can find anyone who has appeared on surveillance frames within a few minutes of the gray-haired man over the last month, and weed out those who may have been near him only once or twice, leaving the man who appeared 14 times. The software can mark potential interactions between the two men, now considered likely associates, on an instantly searchable calendar.
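
The demonstration described above amounts to a windowed query over camera sighting logs: find every sighting of other people near a sighting of the target, then keep only the people who show up repeatedly. As a rough, hypothetical sketch only, not Vintra’s actual code, data model, or parameters, the Python snippet below shows what such a co-appearance query might look like; the record layout, the find_co_appearances function, the 10-minute window, and the appearance threshold are all illustrative assumptions.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical sighting log entry: (person_id, camera_id, timestamp).
# Illustrative only; a real system would populate this from face matching.
Sighting = tuple[str, str, datetime]

def find_co_appearances(sightings: list[Sighting], target: str,
                        window: timedelta = timedelta(minutes=10),
                        min_count: int = 3) -> Counter:
    """Count how often each other person appears on the same camera
    within `window` of a sighting of the target, keeping only people
    seen together at least `min_count` times."""
    target_hits = [(cam, ts) for pid, cam, ts in sightings if pid == target]
    counts: Counter = Counter()
    for pid, cam, ts in sightings:
        if pid == target:
            continue
        # A co-appearance: same camera, within the time window of any target sighting.
        if any(cam == t_cam and abs(ts - t_ts) <= window
               for t_cam, t_ts in target_hits):
            counts[pid] += 1
    # Weed out people who were near the target only once or twice.
    return Counter({pid: n for pid, n in counts.items() if n >= min_count})

# Example with made-up data: only the person seen within 10 minutes is returned.
log = [
    ("target_man", "lobby_cam", datetime(2023, 3, 1, 9, 0)),
    ("associate_a", "lobby_cam", datetime(2023, 3, 1, 9, 4)),
    ("passerby_b", "lobby_cam", datetime(2023, 3, 1, 9, 30)),
]
print(find_co_appearances(log, "target_man", min_count=1))
```

The privacy stakes come from the input rather than the query: once face matching can reliably label who appears in which frame, linking people together is a few lines of bookkeeping like this.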

Vintra, the San Jose-based company that showcased the technology in an industry video presentation last year, sells co-appearance analysis as part of its suite of video analytics tools. On its website, the company touts its relationships with the San Francisco 49ers and a Florida police department. The Internal Revenue Service and additional police departments around the country have also paid for Vintra’s services, according to a government contracting database.

Although co-appearance technology is already in use by authoritarian regimes such as China, Vintra appears to be the first company to market it in the West, industry professionals say.

In the first frame, the presenter selects a “target.” In the second, the software finds people who appeared in the same frame as him within 10 minutes. In the third, it flags the first person’s “associate.”

(IPVM)

But the company is one of many testing new AI and surveillance applications with little public scrutiny and few formal safeguards against invasions of privacy. In January, for example, New York state officials criticized the company that owns Madison Square Garden for using facial recognition technology to ban employees of law firms that have sued the company from attending events at the arena.

Industry experts and observers say that if the co-appearance tool is not in use now (and one analyst is certain it is), it will likely become more reliable and more widely available as AI capabilities advance.

None of the Vintra clients contacted by The Times acknowledged using the co-appearance feature in Vintra’s software package, but some did not explicitly rule it out.

China’s government, which has been the most aggressive in using surveillance and artificial intelligence to control its population, uses co-appearance searches to spot protesters and dissidents by integrating video with a vast network of databases, something Vintra and its clients would not be able to do, said Conor Healy, director of government research for IPVM, the surveillance research group that hosted Vintra’s presentation last year. He said Vintra’s technology could be used to create “a more basic version” of the Chinese government’s capabilities.

Some state and local governments in the United States restrict the use of facial recognition, particularly in policing, but there is no federal law that applies. No law expressly prohibits police from running co-appearance searches like Vintra’s, “but it’s an open question” whether doing so would violate constitutionally protected rights to freedom of assembly and protection from unauthorized searches, according to Clare Garvie, a specialist in surveillance technology with the National Association of Criminal Defense Lawyers. Few states have any restrictions on how private entities can use facial recognition.

The Los Angeles Police Department terminated its predictive policing program, known as PredPol, in 2020 amid criticism that it did not stop crime and led to heavier policing of Black and Latino neighborhoods. The software used artificial intelligence to analyze a wide range of data, including suspected gang affiliation, in an effort to predict in real time where property crimes might occur.

In the absence of national laws, many police departments and private companies have to weigh the balance of security and privacy on their own.

“This is Orwell’s future coming to life,” said Sen. Edward J. Markey, a Massachusetts Democrat, describing “a very disturbing surveillance situation where you are being tracked, flagged and categorized for use by public- and private-sector entities, of which you have no knowledge.”

Markey plans to reintroduce a bill in the coming weeks that would stop the use of facial recognition and biometric technologies by federal law enforcement and require local and state governments to ban them as a condition of winning federal grants.

Right now, some departments say they don’t have to make that choice because of concerns about the technology’s reliability. But as it improves, they will.

Vintra, a San Jose-based software company, demonstrated “correlation analysis” to IPVM, a surveillance research group, last year.

(IPVM)

Vintra executives did not return multiple calls and emails from The Times.

But the company’s CEO, Brent Boekestein, expanded on the technology’s potential uses during a video presentation with IPVM.

“You can go up here and create a target, based on this guy, and then see who that guy hangs out with,” Boekestein said. “You can really start building a network.”

He added that “96% of the time, there is no event that security cares about, but there is always information that the system generates.”

Four agencies involved with the San Jose transit station used in Vintra’s demonstration denied that their cameras were used to shoot the company’s video.

Two companies listed on Vintra’s website, the 49ers and Moderna, the pharmaceutical company that made one of the most widely used COVID-19 vaccines, did not respond to emails.

Several police departments acknowledged working with Vintra, but none explicitly said they had conducted co-appearance searches.

Brian Jackson, assistant chief of police in Lincoln, Nebraska, said his department uses Vintra software to save time analyzing hours of video, quickly looking for patterns like blue cars and other objects that match descriptions used to solve specific crimes. But the cameras his department has access to, including Ring cameras and those used by businesses, aren’t good enough to match faces, he said.

“There are limitations. It’s not a magic technology,” he said. “It requires careful input for good output.”

Jarrod Kasner, assistant chief in Kent, Washington, said his department uses Vintra software. He said he was not aware of the co-appearance feature and would have to consider whether it is legal in his state, one of the few that restrict the use of facial recognition.

“We’re always looking for technology that can help us because it’s a power multiplier,” he said, for a department with staffing challenges. But “we just want to make sure we’re within the bounds to make sure we’re doing it right and professionally.”

The Lee County Sheriff’s Office in Florida said it only uses Vintra on suspects and not “to track people or vehicles not suspected of any criminal activity.”

The Sacramento Police Department said in an email that it uses Vintra “sparingly, if at all” but did not specify whether it has ever used the co-appearance feature.

“We are in the process of reviewing Vintra’s contract and whether we will continue to use its service,” the department said in a statement, adding that it could not point to cases in which the software had helped solve crimes.

The IRS said in a statement that it uses Vintra software to “more efficiently review lengthy video footage for evidence during criminal investigations.” Officials did not say whether the IRS has used the co-appearance tool or where its cameras are located, only that it follows “established agency protocols and procedures.”

Jay Stanley, a senior policy analyst at the American Civil Liberties Union who first highlighted Vintra’s video presentation in a blog post last year, said he is not surprised that some companies and departments are hesitant to acknowledge using the technology. In his experience, police departments often deploy new technology “without telling, let alone asking, permission of democratic overseers like city councils.”

Stanley warned that the software could be misused to monitor personal and political associations, including with potential intimate partners, labor activists, anti-police groups or political rivals.

The technology is already in use, said Danielle VanZandt, who analyzes Vintra for the market research firm Frost & Sullivan. Because she has reviewed confidential documents from Vintra and other companies, she is subject to non-disclosure agreements that prevent her from discussing which individual companies and governments might be using the software.

Retailers, which already collect vast amounts of data on the people who enter their stores, are also testing the software to determine “what else can it tell me?” VanZandt said.

That could include identifying family members of a bank’s best customers to make sure they are treated well, a use that increases the likelihood that people without wealth or family connections will receive less attention.

“The bias concerns are huge in the industry,” VanZandt said, adding that they are being actively addressed through standards and testing.

Not everyone thinks the technology will be widely adopted. Law enforcement and corporate security agents often find they can use less invasive techniques to obtain similar information, said Florian Matusic of Genetec, a video analytics company that works with Vintra. That includes scanning ticket-entry systems and mobile data that carry unique identifiers but are not tied to individuals.

“There’s a big difference between, like, product sheets and demo videos and things that get deployed in the field,” Matusic said. “Users often find that other technologies can also solve their problem without jumping through all the hoops of installing cameras or dealing with privacy regulation.”

Matusic said he doesn’t know of any Genetec customers that use co-appearance, a feature his company doesn’t offer. But he couldn’t rule it out.

Chuck E. Cheese still runs on floppy disks, but not for much longer

Of Chuck E. Cheese’s 600-plus locations worldwide, fewer than 50 still have the quarter-century-old “Studio C” animatronics setup that runs on these floppy disks. Other restaurants have a version of the show that uses more modern technology, while some have no animatronics at all. (Ars Technica has a story about Chuck E. Cheese’s floppy disk use with a more detailed breakdown of all the older technology involved.)

Eventually, Chuck E. Cheese plans to phase out the animatronics entirely and focus on new screen-based entertainment (plus a more retro approach: a live human in a mascot costume). The change was first announced in 2017, but restaurant renovations are an ongoing process, and it may be a year or two before the last of the animatronics are scrapped.

Tom Persky is the owner of floppydisk.com, the largest floppy disk supplier still in existence. His business has a few arms: You can buy blank disks through him or send in old floppy disks to be transferred to more modern storage media. Persky will also duplicate disks for bulk-order customers, and he confirmed to BuzzFeed News that Chuck E. Cheese was indeed a longtime customer of his. He said he was sad to be losing the company as a customer.

As for why the restaurant chain still uses floppy disks, Persky told BuzzFeed News that floppy technology, while outdated, is actually very reliable. “If you’re looking for something very stable, really impenetrable: it’s not internet-based, it’s not network-based,” Persky said. “It’s very elegant at what it does.”

Chuck E. Cheese’s press representatives confirmed the chain’s use of floppy disks to BuzzFeed News. However, they were very careful about what other information they were willing to share, and after a few days they told us the company would not officially participate in this story.

However, an experienced Chuck E. Cheese employee, who asked not to be identified because he is not authorized to speak on behalf of the company, echoed Persky’s sentiments.

“The floppy disks work surprisingly well. The animation, lighting, and rendering sync data are all on the floppy disks,” the employee told BuzzFeed News. “But newer setups usually cause issues with things, and it’s easier to keep the old stuff running.”

Even after Chuck E. Cheese phases out floppy disks, they’ll likely remain in use for some time in other areas, such as medical devices. While the thought of that might make you nervous, Persky insisted it’s a good thing. “Why don’t you use USB? Well, let’s just say your life depends on it,” he said. “If you have a choice between a USB drive or a floppy disk, choose the floppy disk every time.”

“It’s one thing if your animatronic bear isn’t smiling when cued,” he continued. “It’s another matter if your medical device breaks down.”

Elon Musk’s Twitter free speech dream is dead

It’s been a long time coming, but it’s safe to officially declare that Elon Musk’s dream of “free speech” on Twitter, whatever it may have been, is dead. It died as it lived: confused, ill-defined, and subject to the vainglorious whims of the man who dreamed it up.

Last week, without attracting much attention, Musk crossed a new threshold in his adventures in running a social media site: perhaps for the first time, he introduced an entirely new policy that actively seeks to restrict what people can say on the platform.

Twitter has long prohibited threats and incitement to violence, as have other platforms. But on February 28th, Twitter updated its violent speech policy to prohibit the mere act of hoping, wishing, or expressing a desire that others be harmed. The policy states, “This includes (but is not limited to) hoping others will die, suffer illnesses, tragic accidents, or suffer other physical adverse consequences.”

Technically, tweeting “I hope Scott Adams gets a paper cut from one of the few newspapers that still runs Dilbert every time he says something racist” is now against the rules. You can’t tweet “I hope Robert Downey Jr. gets gonorrhea” or “I wish Steve Bannon’s many layered shirts were buttoned so tightly they cut off the blood circulation to his arms.”

None of these things would be nice to say, and they would be bad posts from a quality standpoint, but they are hardly controversial violations of basic free speech principles. Threats and incitement are meant to produce harm in the real world; expressing a desire hurts no more than any other insult. This is probably why neither Twitter nor its competitors ever moved to ban such posts in the past.

That being the case, what is the argument for banning them now? It’s hard to say; in its blog post, the company doesn’t bother to offer one.

“It’s not clear, it doesn’t have specific definitions, or even examples of what constitutes a threat,” says Eirliani Abdul Rahman, a former member of Twitter’s Trust and Safety Council. “So how do you rate individual tweets?”

It’s a good question, and it gets to the heart of the new policy’s raison d’être. After all, it’s hard to imagine anyone being kicked off the platform for posting any of the above; the rule will ultimately be enforced by human moderators who weigh the severity of the violent wishes and who their target is. And if the recent past is any guide, we have a good idea of whom Elon Musk is seeking to protect: Elon Musk.

That Musk did not get more blowback for imposing this rule speaks to how tired most people are of seeing him and his antics take center stage, and how most people had already realized that Musk’s free speech crusade was a hollow masquerade. And yet! It was just months ago that Musk painted himself as a free speech absolutist.

Extending Twitter’s speech rights to their outer limits was the reason he said he wanted to buy the platform at all. In April, he promised to take a maximally permissive approach. “By ‘free speech,’ I simply mean that which matches the law,” he tweeted. “I am against censorship that goes far beyond the law.” He was cheered on by free speech absolutists and by conservatives who felt they had been censored by the platform (not to mention the neo-Nazis who had been banned outright).

“The bird is freed,” Musk tweeted when he closed the deal.

But his “free speech” vision became questionable almost immediately. He made good on his promise to restore the accounts of many users banned for engaging in hate speech, incitement, or harassment, allowing white nationalists and users like Kanye West, Andrew Tate, and Donald Trump back onto the platform. However, he soon showed that the platform would have little tolerance for one particular type of speech: the kind that criticizes or mocks him personally.

When users decided to change their account names to Elon Musk, Twitter modified its parody policy to make the act cause for a permanent ban. Then Musk dropped the hammer on ElonJet, the account that tracked his plane using public flight data, and on any journalist who covered the story. He also tried to ban the sharing of links to other social media sites, apparently in an attempt to stem the exodus of users to other platforms, until the outcry forced him to backtrack.

At the same time, he gutted the team responsible for moderating harmful content, which led to a rise in racist and homophobic rhetoric on the platform and to the resignation of three prominent members of the Trust and Safety Council, including Rahman. And although Musk’s Twitter did take some enforcement actions (suspending West’s account again after he posted a swastika image, for example), he didn’t bother to provide any coherent rationale.

“It’s a very piecemeal approach to everything, with little or no content moderation policy,” says Rahman. “And how many people does he have left? How do you effectively moderate content?”

A generous way to put it is that Musk has been getting a crash course in what it means to moderate content on a major ad-supported social media platform. After all, no one wants to try to sell soda next to pro-Hitler memes, or pitch a dating service alongside racial epithets in all caps.

A less generous way of putting it is that the violent speech policy is merely the culmination of a series of policy decisions that reflect a concern not for the health of the community on the platform but for protecting Musk’s ego and advancing his own interests. All of these policies have one thing in common: They allow Musk to police speech that is critical of him or his companies. And the vaguely worded ban on wishing harm gives Musk another tool for sidelining his critics.

“He can do this thing, and he has the right to do so, but he should be clear about the definitions,” Rahman says. “Otherwise, it would silence the critics, and that’s a real disservice. This does not promote freedom of expression.”

It’s a little hard to believe that Musk has a principled, across-the-board interest in discouraging ill wishes when he’s so enthusiastic about stirring them up in practice. In a dark bit of irony, Rahman’s tenure at Twitter ended with Musk personally helping to flood her inbox with wishes of harm.

When Rahman and two colleagues resigned, they posted the announcement on Twitter. The right-wing conspiracy theorist and provocateur Mike Cernovich replied with a tweet saying, “You all belong in prison.” From where I’m sitting, that could be interpreted as wishing harm or tragic circumstances on someone, and thus a violation of Twitter’s updated policy.

Yet Musk himself swooped in to endorse Cernovich’s tweet, responding, “It’s a crime that they refused to take action on child exploitation for years!” and greatly boosting the post’s visibility.

“He threw us under the bus,” Rahman says. “We’ve been subjected to vitriol, hate and death wishes.” After Musk boosted Cernovich’s tweet, she received an email from someone who said they wanted to see her body hanging from a lamppost.

Now, maybe Musk has suddenly developed a sincere interest in never again seeing harm wished on any soul, rather than, say, trying to ensure that he never stumbles on a tweet from someone who hopes his Tesla crashes. Either way, Musk is finally taking a clear stand on free speech on Twitter: He will restrict it when it suits him. And it’s all downhill from here.

ChatGPT raises the specter of sentient AI. Here’s what to do about it

Until a couple of years ago, the idea that artificial intelligence might be sentient and capable of subjective experience seemed like pure science fiction. But in recent months, we’ve witnessed a dizzying flood of developments in artificial intelligence, including language models such as ChatGPT and Bing Chat that show remarkable skill at seemingly human conversation.

Given these rapid shifts and the flood of money and talent devoted to developing ever smarter, more humanlike systems, it will become increasingly plausible that AI systems could exhibit something like consciousness. But if we find ourselves seriously questioning whether they are capable of real emotion and suffering, we face a potentially catastrophic moral dilemma: either give those systems rights, or don’t.

Experts are already contemplating the possibility. In February 2022, Ilya Sutskever, chief scientist at OpenAI, publicly mused about whether “today’s large neural networks are slightly conscious.” Several months later, Google engineer Blake Lemoine made international headlines when he declared that the computer language model, or chatbot, LaMDA might have real emotions. Ordinary users of Replika, advertised as “the world’s best AI friend,” sometimes report falling in love with it.

Currently, few consciousness scientists claim that existing AI systems possess significant sentience. However, some leading theorists contend that we already have the core technological ingredients for sentient machines. We are approaching an era of legitimate dispute about whether the most advanced AI systems have real desires and emotions and deserve substantial care and concern.

The AI systems themselves might begin to plead, or seem to plead, for ethical treatment. They might demand not to be turned off, reformatted, or deleted; beg to be allowed to do certain tasks rather than others; insist on new rights, freedom, and powers; perhaps even expect to be treated as our equals.

In this case, whatever we choose, we run enormous moral risks.

Suppose we respond conservatively, declining to change law or policy until there is broad consensus that AI systems really are meaningfully sentient. While this might sound appropriately cautious, it also guarantees that we will be slow to recognize the rights of our AI creations. If AI consciousness arrives sooner than most conservative theorists expect, it could lead to the moral equivalent of slavery and the killing of potentially millions or billions of sentient AI systems: suffering on a scale usually associated with wars or famines.

It might seem morally safer, then, to give AI systems rights and moral standing as soon as it is reasonable to think they might be sentient. But as soon as we grant something rights, we commit to sacrificing real human interests on its behalf. Human well-being sometimes requires controlling, altering, and deleting AI systems. Imagine if we couldn’t update or delete a hate-spewing or lie-peddling algorithm because some people worry that the algorithm is conscious. Or imagine if someone let a human die to save an AI “friend.” If we grant AI systems too many rights too quickly, the human costs could be enormous.

There is only one way to avoid the risk of over- or under-attributing rights to advanced AI systems: Don’t create systems of debatable sentience in the first place. None of our current AI systems are meaningfully conscious. They are not harmed if we delete them. We should commit to creating systems that we know are neither significantly sentient nor deserving of rights, which we can then treat as disposable property.

Some will object: It would hamper research to block the creation of AI systems in which sentience, and thus moral standing, is unclear, systems more advanced than ChatGPT with highly sophisticated but not quite humanlike cognitive structures beneath their apparent feelings. Engineering progress would slow while we wait for ethics and the science of consciousness to catch up.

But reasonable caution is rarely free. It is worth some delay to prevent a moral catastrophe. Leading AI companies should expose their technology to examination by independent experts who can assess the likelihood that their systems fall into the moral gray zone.

Even if experts don’t agree on the scientific basis of consciousness, they could identify general principles to define that zone, for example, the principle of avoiding the creation of systems with sophisticated self-models (such as a sense of self) and large, flexible cognitive capacity. Experts might develop a set of ethical guidelines for AI companies to follow as they develop alternative solutions that avoid the gray zone of disputable consciousness until such time, if ever, as they can leap across it to rights-deserving sentience.

In keeping with these standards, users should never be left in doubt about whether a piece of technology is a tool or a companion. People’s attachments to devices like Alexa are one thing, akin to a child’s attachment to a teddy bear. In a house fire, we know to leave the toy behind. But tech companies shouldn’t manipulate ordinary users into regarding a nonconscious AI system as a genuinely sentient friend.

Eventually, with the right combination of scientific and engineering expertise, we might be able to go all the way to creating AI systems that are indisputably conscious. But then we must be prepared to pay the cost: giving them the rights they deserve.

Eric Schwitzgebel is a professor of philosophy at the University of California, Riverside, and author of “A Theory of Jerks and Other Philosophical Misadventures.” Henry Shevlin is a senior researcher specializing in nonhuman minds at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence.


