Artificial Intelligence

What can we learn from Hollywood about our possible AI future?

A few weeks ago I promised I'd write about what we can learn from Hollywood about our potential future with AI. To kick off the series, I want to begin with something quite simple, yet very important to understand if we are to think about AI properly: it is not a tool!

You've probably read a lot about "AI agents" lately, but I find there is still a lot of confusion about what AI agents really are, or more precisely, what they should be. Let's turn to movies for help...

AI is an agent and not a tool

In the movie Elysium, the Earth has experienced the worst-case scenario of overpopulation and pollution. Conveniently enough, the wealthy have built themselves a space station, called Elysium, and evacuated, leaving the barren Earth to the less fortunate. And while they reside in the heavens, the wealthy class remains in control of the Earth economy.
The rest of the population is stuck on Earth and participates in the economy, mainly as a source of labour. One of the main social determinants, factors which distinguish the "wealthy" from the "poor", is access to healthcare.

The protagonist of the story, played by Matt Damon, is a paroled convict trying to make ends meet. He works in a factory which assembles AI-powered robots, and through a series of unfortunate events he absorbs a lethal dose of radiation at work. He is given a few days to live, setting him on an existential adventure to reach Elysium and save himself from certain death.

In one early scene (Elysium - Parole Officer Scene), before he is given his death sentence, Matt Damon's character visits his parole officer, a lifeless, almost comical figurine powered by a chatbot. The chatbot is clearly built on AI technologies, as it can understand speech and respond in natural language.
However, the interaction is completely "linear". The chatbot refuses to deviate from a very well defined script no matter what Matt Damon says or asks, as he struggles to explain his situation to a figurine which keeps telling him to stop talking. The scene naturally ends in frustration.

Sound familiar? Raise your hand if you've tried talking to a chatbot deployed before 2023 and haven't angrily typed the words "I want to talk to a human!!!" two minutes into the interaction. No one...?

The vision of AI portrayed in this scene is essentially a misunderstanding of the technology. There is a challenge, in this case a systemic lack of parole officers, so we decide to build new digital tools to make the specific process of the parolee interview more efficient (spend less time per interview, for instance).
The process itself remains fixed and rigid, and one could argue in this case mostly useless. The objective of the chatbot is not to improve the outcome of the interview, reduce the rate of repeat offences, improve organisational resilience, and so on.

The goal is to improve a narrow set of KPIs, in this case the time needed to process a parolee. And to achieve this goal, we don't need AI-powered solutions, only AI-powered software. The chatbot is defined by the process flow, is able to execute it quickly, and the user is simply expected to learn to give it the inputs it needs, without much wiggle room.

We are, of course, allowed to think about AI in this way. We can build new digital tools based on AI, and AI can certainly enhance the capabilities of software. But I find this to be a common theme in a lot of misguided digital transformation attempts:

  1. We define solutions to make a process more efficient, but we don't challenge the process or the organisation structure.
  2. We expect users to adapt to the technology and not the other way around.

Now let's switch to another movie: "Avengers: Endgame".

In this one, half of the Universe has been erased with a snap of the fingers of the antagonist, Thanos, a power he acquired by collecting something called "infinity stones". I don't know... the whole thing never made much sense, even allowing for all the creative freedom.
The Avengers, a group of regular and super-humans who survived the "erasure", set out on a mission to travel back in time and kill Thanos before he can snap his fingers.

1 star out of 5 for creativity on this one, btw. Time travel is a half-hearted solution to every story which lacks vision. Also, Superman jumped the shark with it already, but I digress.

There is one small challenge the Avengers are facing: they don't know how to travel back in time. Enter Tony Stark, a flamboyant billionaire/genius who likes to spend his free time flying around in a red metal suit and fighting alien invaders.
Tony has an AI-powered assistant, JARVIS (Just a Rather Very Intelligent System), who can perform a multitude of tasks for him, from turning on the music to performing complex engineering computations. JARVIS is something like Alexa with a PhD in Mathematics, Physics, Chemistry and Engineering who also likes to rock.

In one scene, Tony Stark is trying to solve time travel (Avengers: Endgame - Tony Stark Solves Time Travel). He starts with a "mild inspiration" that he'd like to check out. He proceeds to delegate a complex mathematical task to JARVIS. The physicist in me cringes every time I hear Stark say "Give me the eigenvalue of that particle, factoring in spectral decomp", but you know... Hollywood.
Anyway, JARVIS goes on to compute with a "just a second" note, and before you know it, Tony Stark falls on his bottom in shock. Time travel solved!

The vision of JARVIS is clearly not that of a digital tool. It is a versatile entity capable of using a variety of (digital) tools, as well as communicating in natural language. The only thing that is well defined about JARVIS is its role: Tony Stark's loyal assistant. The way JARVIS and Stark interact in this scene is very much the way an R&D team leader communicates with a grad student: Stark drives the vision and delegates tasks to JARVIS, with the end goal of reaching a common objective.
Stark is not using JARVIS to make a computation; he is collaborating with JARVIS to solve time travel. The level of abstraction of the tasks is also remarkable. JARVIS understands advanced concepts such as "spectral decomp" and can make decisions and design processes which allow him to accomplish the tasks.

What is perhaps most fascinating is that Stark and JARVIS alone don't make much sense without each other: Stark would likely never be able to do the complex computations in time if it weren't for JARVIS, and there is no evidence that JARVIS could solve time travel without the "mild inspirations" of Stark. The two function as a mutually enhancing team!

Stark also doesn't have to do much to adapt to JARVIS; there is very little onboarding necessary. This gives Stark much-needed time to think about the core objectives and the big picture instead of being bogged down with procedural details.

When we talk about AI as an agent, what we mean is something that is very much not like the parole officer from Elysium and much more like JARVIS from the Avengers. AI can, and should, serve as an enhancing agent to humans, where the two work together in ways which are more than the sum of the parts.
Such a collaborative framework allows humans and technology to function on the level of objectives rather than tasks and processes, a very different paradigm compared to traditional digital tooling!

What if I now told you that, neglecting all the CGI and Hollywood flair, we are quite close to building collaborative agents (or assistants, if you will) such as JARVIS with current technology?

If you've ever tried to analyse an Excel file with ChatGPT, you already know this is the case. As an exercise, I gave ChatGPT a CSV file containing open data on global energy production. I asked it, in plain English, to manipulate the data, make visualisations and even produce forecasts using linear regression models. ChatGPT, much like JARVIS, was able to both understand advanced concepts such as "regression" and "plot" and use computational tools (e.g. Python) to perform the necessary tasks. It sometimes made mistakes; I gave it hints and suggestions, and it corrected course. In mere minutes, by collaborating with ChatGPT, I was able to produce insights about world energy production which would have taken me significantly longer without the assistant by my side.
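To give a flavour of the kind of code the assistant produced behind the scenes, here is a minimal sketch of a linear-regression forecast. The yearly figures below are illustrative stand-ins, not the actual dataset I used:

```python
import numpy as np

# Hypothetical yearly global energy production figures (TWh) --
# placeholder values standing in for the open dataset from the exercise.
years = np.array([2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022])
production = np.array([24255, 24919, 25606, 26653, 27001, 26823, 28466, 29165])

# Fit a simple linear trend: production ~ slope * year + intercept
slope, intercept = np.polyfit(years, production, deg=1)

def forecast(year: int) -> float:
    """Extrapolate the fitted linear trend to a given year."""
    return slope * year + intercept

print(f"Trend: {slope:.1f} TWh per year")
print(f"Forecast for 2025: {forecast(2025):.0f} TWh")
```

The point of the collaboration is precisely that I never had to write this myself: I described the analysis in plain English, and the assistant chose the tools and produced (and re-ran, and fixed) code along these lines.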

I know what you may be thinking: a few plots and a CSV file are a far cry from solving time travel. But the point is not the objective itself; it is that AI is defined by its capabilities as much as by the way we interact with it.
Approaching it as a tool may be familiar, but it is a lost opportunity. With AI, we have the chance to rethink how we design and interact with technology in ways which enhance us! Why shouldn't we take it?

Viewing AI through the lens of "just another digital tool" will fail. AI agents are already generating a lot of buzz, but the vision of what an AI agent is has not crystallised yet. What defines an agent? How should AI agents and human agents interact? How should AI agents interact with each other? How should hybrid AI/human teams work? These are still very much open questions.

We'll leave them for the next iteration of this series...

In the meantime don't forget: The AI future belongs to us!


Mihailo Backovic