When Ukraine revealed this week that it had deployed AI in its spectacular Spider’s Web drone attack on Russian bombers, the news should not have come as a surprise to those who had been following the rapid adoption of the technology, not only in warfare but at an even more breakneck speed in civilian life.
It took two attempts to get ChatGPT to underwrite this conclusion: we are living through an inflection point in the adoption of AI, similar to the explosion in internet adoption and use cases in the early to mid-2000s.
Only this time the change will be much deeper and more existential. And this time the transformation is driven far more by large corporations than it was in the early, somewhat anarchic days of the internet.
That two prompts were needed for ChatGPT to come up with the inflection point approximation shows some of the problems with the current state of AI and the way we use it.
Its answers depend heavily on how it is prompted, and even when prompted relatively precisely, it is prone to changing its mind as we feed it new information.
The technology is the ultimate black box, despite attempts to develop transparency, oversight and so-called XAI (explainable AI). That effort lags light years behind not just the development of new models but also their adoption in crucial fields of human endeavour, such as warfare, in Ukraine and also in Gaza.
AI's inescapable intrusion
The cavalcade of recent announcements on the use of AI in business, such as Meta’s intention to make AI tools available to all advertisers on its platforms by the end of next year, evokes the now seemingly familiar spectre of disruption.
In the case of AI, though, we might underestimate the impact of a technology that not only makes old ways of doing things obsolete but has the potential to do the same to ourselves, while being even more impenetrable than what we have become used to.
The technological revolution of the last couple of decades, particularly brought on by the internet and its intrusion into almost all aspects of our lives, from dating to careers, is probably only the beginning of a much larger transformation that is to come.
While we might question some of its effects, the incessant drumbeat of technological development has made the ultimate penetration of AI into all of human society feel inescapable, and it probably is.
There’s no apt analogy. It’s part runaway train, with very little chance of ever being stopped. And it’s partly like a cancer metastasising, as it will likely, eventually, push aside or replace parts of actual organic intelligence.
More than extension
I’m speaking as a relatively early adopter of the internet, a while before the existence of browsers. Something of a computer nerd in the 1980s, I could code in a handful of programming languages and was fascinated by the emergence of this new technology that gave access to information and people that had been much harder to reach until then.
I also never thought it would catch on with the general public: Unix systems and the Gopher protocol were not exactly user-friendly.
It’s fair to say that even after the emergence of the web in the early 1990s and the first browsers, Mosaic, Netscape and then Microsoft’s Internet Explorer, it took almost a decade for the internet to become less abstract to most people.
In the case of AI, we’re already far beyond that point. While terms such as machine learning and large language models (LLMs) were somewhat obscure until three years ago, the launch of ChatGPT changed that even more drastically than the advent of browsers did for the internet.
The internet, of course, technologically paved the way for LLMs by providing the huge caches of data they needed. And it also conceptually paved the way by making AI seem, in many cases, like merely an extension of the web we already know, just another tool.
While that’s undoubtedly part of what AI is, a tool that helps us perform often humdrum tasks such as transcribing, collating and summarising, that impression is too limited and potentially unhelpful.
Existential issues
To some degree, our familiarity with technological disruptions obscures what we’re about to live through. Take the threat that AI poses to certain professions. The general tenor is: we’ve seen this before with the internet.
Yes, the volume of letter mail, for example, was decimated, yet, hey presto, there was Amazon and countless other online outlets to pump up the volume of package deliveries.
The internet has probably, on balance, created more jobs than it has destroyed, except in some unlucky, yet crucial, sectors, such as journalism.
The expectation that AI will follow the same route is much mistaken. That is in no small part because many of the jobs that the internet actually created or augmented, such as marketing, customer support, even coding, can be done by AI.
Few people might shed a tear for the loss of online advertising jobs as they are taken over by AI, but that would be a very shortsighted response.
AI is now embedded or winding its way through crucial fields such as health and health insurance, hiring, finance, law, education, culture, media and journalism.
And it’s not just, or even mainly, a difficult-to-quantify employment issue. Ethical, moral and even existential issues must take priority.
No one is fully in control
AI in warfare has been used, among other things, for target selection. In Ukraine’s case the targets were Russian bombers; in Israel’s case, it was reportedly used against ‘low-level Hamas militants’.
Leaving decisions to machines inevitably lessens the human sense of responsibility, even if a human has to sign off on or greenlight the eventual action. In Ukraine’s case, AI was said to have been used after drones lost contact, making such a final human check impossible.
But even in non-violent scenarios, are we prepared to trust AI to hire and fire, to decide who gets medical procedures reimbursed or receives benefits, or eventually who goes to jail?
Even XAI, now in its infancy, will not solve all these moral and ethical conundrums. If it is ever fully implemented, mostly experts are likely to be able to make sense of it, and probably still only with the help of other AI tools.
For now, though, speed trumps safety in AI development, as large companies battle it out and whole nations, not least China, race to gain an advantage or at least not be left behind.
There are more reasons to worry: AI is being trained on the internet, which is not the best representation of humankind. And it is developed mainly by young Western and East Asian men, with an over-representation of tech bros.
It is being developed in a hyper-capitalist moment, with little regard for social context and benefit. Value extraction is everything; equity barely enters the equation.
Without being a Luddite or a spreader of technological doom-thinking, one must wonder: can this end well?
I’ll leave the last words to ChatGPT. The LLM optimistically posits that at least we’re now talking about these hard questions.
Yet that was preceded by a much more insightful observation about what’s most unsettling: “How many people assume that someone, somewhere must be in control — when often, no one fully is.”