With the release of OpenAI’s ChatGPT-5, the world is one step closer to unleashing a general-purpose superintelligence that can cognitively outperform each of us by a wide margin. As this day nears, I am increasingly worried that we are woefully unprepared for the shockwaves this will send through society — and it’s probably not for the reasons you expect.
Try this little experiment: Ask anyone you know if they are concerned about AI, and they will likely share a variety of fears, from massive disruptions in the job market and the reality-bending impacts of deepfakes, to the unprecedented power being concentrated in a handful of large AI companies. What they rarely mention is the personal, psychological experience of living alongside a smarter mind. In other words, most people have never honestly imagined what their life will really feel like the day after superintelligence becomes widely available.
As context, artificial superintelligence (ASI) refers to systems that can outthink humans on most fronts, from planning and reasoning to problem-solving, strategic thinking and raw creativity. These systems will solve, in a fraction of a second, complex problems that might take the smartest human experts days, weeks or even years to work through. This terrifies me, and it’s not because of the doomsday scenarios that dominate our public discourse.
No, I am worried about the opposite risks — the dangers that could emerge in the best-case scenarios where superintelligence is helpful and benevolent. Such an ASI will have many positive impacts on society, but it could also be deeply demoralizing to our core identity as humans. After all, the world will feel different when each of us knows that a smarter, faster, more creative intelligence is available on our mobile devices than between our own ears.
So ask yourself, honestly: How will humans act in this new reality? Will we reflexively seek advice from our AI assistants as we navigate every little challenge we encounter? Or worse, will we learn to trust our AI assistants more than our own thoughts and instincts?
Wait — before you answer, you must update your mental model. Currently, we engage AI through a Socratic framework that requires us to ask questions and get answers (like Captain Kirk did aboard the Enterprise in 1966). But that’s old-school thinking. We are now entering a new era in which AI assistants will be integrated into body-worn devices that are equipped with cameras and microphones, enabling AI to see what you see, hear what you hear and whisper advice into your ears without you needing to ask.
In other words, our future will be filled with AI assistants that ride shotgun in our lives, augmenting our experiences with optimized guidance at every turn. In this world, the risk is not that we reflexively ask AI for advice before using our own brains; the risk is that we won’t need to ask — the advice will just stream into our eyes and ears, shaping our actions, influencing our decisions and solving our problems before we’ve had a chance to think for ourselves.
‘Augmented mentality’ will transform our lives
I refer to this framework as ‘augmented mentality’ and it is about to hit society at scale through AI-powered glasses, earbuds and pendants. This is the future of mobile computing, and it is already driving an arms race between Meta, Google, Samsung and Apple, as they position themselves to produce the context-aware AI devices that will replace handheld phones.
Imagine walking down the street in your town. You see a coworker heading towards you. You can’t remember his name, but your AI assistant does. It detects your hesitation and whispers the coworker’s name into your ears. The AI also recommends that you ask the coworker about his wife, who had surgery a few weeks ago. The coworker appreciates the sentiment, then asks you about your recent promotion, likely at the advice of his own AI.
Is this human empowerment, or a loss of human agency?
It will certainly feel like a superpower to have an AI in your ear that always has your back, ensuring you never forget a name, always have witty things to say and are instantly alerted when someone you’re talking to is not being truthful. On the other hand, everyone you meet will have their own AI muttering in their own ears. This will make us wonder who we’re really interacting with — the human in front of us, or the AI agent giving them guidance (check out Carbon Dating for fun examples).
Many experts believe that body-worn AI assistants will make us feel more powerful and capable, but that’s not the only way this could go. These same technologies could make us feel less confident in ourselves and less impactful in our lives. After all, human intelligence is the defining feature of humanity, the thing we take most pride in as a species, yet we could soon find ourselves deferring to AI assistants because we feel mentally outmatched. Is this empowerment — an AI that botsplains our every experience in real time?
I raise these concerns as someone who has spent my entire career creating technologies that expand human abilities. From my early work developing augmented reality to my current work developing conversational agents that make human teams smarter, I am a firm believer that technology can greatly enhance human abilities. Unfortunately, when it comes to superintelligence, there is a fine line between augmenting our human abilities and replacing them. Unless we are thoughtful in how we deploy ASI, I fear we will cross that line.
Louis Rosenberg is an early pioneer of virtual and augmented reality and a longtime AI researcher. He founded Immersion Corp, Outland Research and Unanimous AI.
Source: VentureBeat



