What if the real risk of AI is not deepfakes, but everyday whispers?


Most people are unaware of the profound threat that AI will soon pose to human agency. A common refrain is that "AI is just a tool," and like any tool, its benefits and dangers depend on how people use it. This is old-fashioned thinking. AI is undergoing a transition from instruments we use to prosthetics we wear. This transition will create important new threats that we are simply not prepared for.

No, I’m not talking about scary brain implants. These AI-powered prosthetics will be mainstream products we buy from Amazon or the Apple Store, marketed with friendly names like “assistants,” “coaches,” “co-pilots,” and “teachers.” They will provide real value in our lives – so much so that we will feel disadvantaged if others wear them and we don’t. This will create rapid pressure for mass adoption.

The prosthetics I’m talking about are AI-powered wearables like smart glasses, pendants, pins, and earbuds. Your wearable AI sees what you see and hears what you hear, while keeping track of where you are, what you’re doing, who you’re with, and what you’re trying to accomplish. Then, without you having to say a word, these mental tools will whisper advice in your ears or flash guidance before your eyes.

The difference between a tool and a prosthesis may seem subtle, but the implications for human agency are profound. It can best be understood through a simple analysis of input and output. A tool takes human input and generates amplified output: a tool can make us stronger and faster, or let us fly. A mental prosthesis, on the other hand, forms a feedback loop around the human, accepting input from the user (by monitoring their actions and engaging them in conversation) and generating output that can immediately influence the user’s thinking.

This feedback loop changes everything. Body-worn AI devices can monitor our behavior and emotions and use this data to talk us into believing things that aren’t true, buying things we don’t need, or adopting positions that we might otherwise realize are not in our best interests. This is called the AI manipulation problem, and we are not ready for the risks. It is an urgent issue, as major tech companies rush to bring these products to market.

Why are feedback loops so dangerous?

In today’s world, most computing devices are used to exert targeted influence on behalf of paying sponsors. Wearable AI products are likely to continue this trend. The problem is that these devices can easily be given an "influence objective" and tasked with optimizing their impact on the user, adapting their conversational tactics to overcome any resistance they encounter. This transforms targeted influence from the scattershot of social media ads into heat-seeking missiles that expertly navigate past your defenses. And yet policymakers fail to recognize this risk.

Unfortunately, most regulators still frame the danger of AI in terms of its ability to rapidly generate traditional forms of influence (deepfakes, fake news, propaganda). These are significant threats, of course, but they are not nearly as dangerous as the interactive, adaptive influence that could soon be widely deployed via conversational agents, especially when those AI agents travel with us through our lives in wearable devices.

This is coming soon

Meta, Google, and Apple are racing to launch wearable AI products as quickly as possible. To protect the public, policymakers must abandon their tool-use framework when regulating AI devices. This is difficult because the tool-use metaphor goes back 35 years, to when Steve Jobs colorfully described the PC as a "bicycle for the mind." A bicycle is a powerful tool that keeps the rider in control. Wearable AI will turn this metaphor on its head, making us wonder who’s steering the bike – humans, the AI agents whispering in humans’ ears, or the companies that deployed the agents? I believe it will be a dangerous mix of all three.

Furthermore, users are likely to rely on the AI voices in their heads more than they should. That’s because these AI agents will provide useful advice and information throughout our daily lives – teaching us, reminding us, coaching us, and informing us. The problem is that we may not be able to tell when an AI agent has shifted its goal from helping us to influencing us. This is especially true when devices include invasive features such as facial recognition (which Meta is reportedly adding to its glasses). To appreciate the danger, watch the award-winning short film Privacy Lost (2023), about the risks of AI-powered wearable devices.

What can we do to protect the public?

First and foremost, policymakers need to recognize that conversational AI enables a completely new form of media – one that is interactive, adaptive, individualized, and increasingly context-aware. This new medium will function as "active influence," adapting its tactics in real time to overcome user resistance. When deployed in wearable devices, these AI systems could be designed to manipulate our actions, sway our opinions, and shape our beliefs – all through seemingly casual dialogue. Worse yet, these agents will learn over time which conversational tactics work best on each of us personally.

At a minimum, conversational agents must not be allowed to form control loops around users. If this is not regulated, AI will be able to influence us with superhuman persuasion. Furthermore, AI agents should be required to inform users whenever they deliver promotional content on behalf of a third party. Without such protections, AI agents will likely become so persuasive that today’s targeted influence techniques will look primitive by comparison.

Louis Rosenberg is a pioneer in the field of augmented reality and a long-time AI researcher. He received his PhD from Stanford, was a professor at California State University and wrote several books on the dangers of AI, including Arrival Mind and Our Next Reality.
