We need a whole new vocabulary to keep up with the evolution of AI
Technology now sprints past our words. As machines get smarter, our language lags. Buzzwords, recycled slogans and podcast quips fill the air but clarify nothing. This isn’t just messy; it’s dangerous. Investors chase vague terms, policymakers regulate without definitions and the public confuses breakthroughs with sci-fi.
We’re in a tech revolution with a vocabulary stuck in the dial-up days: a generational shift in technology without a stable language to navigate it.
This language gap is not a side issue. It is a core challenge that requires a new discipline: a fierce scepticism of hype and a deep commitment to the details. The instinct to simplify is a trap. Once, a few minutes were enough to explain a breakthrough app like Google or Uber. Now, innovations in robotics or custom silicon resist such compression. Understanding OpenAI’s strategy or Nvidia’s product stack requires time, not soundbites.
We must treat superficial simplicity as a warning sign. Hot areas like AI ‘agents’ or ‘reasoning layers’ lack shared standards or benchmarks. Everyone wants to sell a ‘reasoning model,’ but no one agrees on what that means or how to measure it. Most corporate announcements are too polished to interrogate, and a press release is not proof of defensible innovation. Extraordinary claims need demos, user numbers and real-world metrics. When the answers are fuzzy, the claim is unproven. In today’s landscape, scepticism is not cynicism. It is discipline.
This means we must get comfortable with complexity. Rather than glossing over acronyms, we must dig in. Modern tech is layered with convenient abstractions that make understanding easier, but often too easy. A robo-taxi marketed as ‘full self-driving’ or a model labelled ‘serverless’ demands that we look beneath the surface.
We don’t need to reinvent every wheel, but a good slogan should never be an excuse for missing what is critical. The only way to understand some tools is to use them. A new AI research assistant, for instance, only feels distinct after you use it, not when you read a review of what it can or cannot accomplish.
In this environment, looking to the past or gazing towards the distant future is a fool’s errand. History proves everything and nothing. You can cherry-pick the dot-com bust or the advent of electricity to support any view. It’s better to study what just happened than to force-fit it into a chart of inevitability.
The experience of the past two years has shattered most comfortable assumptions about AI, compute and software design. The infographics about AI diffusion or compute intensity that go viral on the internet often come from people who study history more than they study the present. It’s easier to quote a business guru than to parse a new AI framework, but we must do the hard thing: analyse present developments with an open mind even when the vocabulary doesn’t yet exist.
The new ‘Nostradami’ of artificial intelligence: This brings us to the new cottage industry of AI soothsaying. Over the past two years, a fresh crop of ‘laws’ has strutted across conference stages and op-eds, each presented as the long-awaited Rosetta Stone of AI. We’re told to obey the Scaling Laws (just add more data and compute), respect the Chinchilla Law (actually, train on roughly 20 tokens per parameter) and reflect on the reanimated Solow Paradox (productivity still yawns, therefore chatbots are overrated).
When forecasts miss the mark, pundits invoke Goodhart’s Law (the metric became a target, so it stopped being a good measure) or Amara’s Law (we overhype the short term and under-hype the long term). The Bitter Lesson tells us to buy GPUs (graphics processing units), not PhDs. Cunningham’s Law says wrong answers attract better ones.
Our favourite was when the Victorian-era Jevons Paradox (greater efficiency tends to raise, not lower, total consumption) was invoked to argue that a recent efficiency breakthrough wouldn’t collapse GPU demand. We’re not immune to this temptation and have our own Super-Moore Law; it has yet to go viral.
These laws and catchphrases obscure more than they reveal. The ‘AI’ of today bears little resemblance to what the phrase meant in the 1950s or even late 2022.
The term ‘transformer’, the architecture that kicked off the modern AI boom, is a prime example. Its original 2017 equation exists now only in outline. The working internals of today’s models, with flash attention, rotary embeddings and mixture-of-experts gating, have reshaped the original methods so thoroughly that the resulting equations resemble the original less than general relativity resembles Newton’s laws.
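For readers who want the reference point, that 2017 formula is scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ/√dₖ)V, where Q, K and V are the query, key and value matrices and dₖ is the dimension of the keys.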
This linguistic mismatch will only worsen as robotics grafts cognition onto actuators and genomics borrows AI architecture for DNA editing. Our vocabulary, built for a slower era, struggles to keep up.
Beneath the noise, a paradox remains: staying genuinely current is both exceedingly difficult and easier than ever. It’s difficult because terminology changes weekly and breakthroughs appear on preprint servers, not in peer-reviewed journals.
However, it’s easier because we now have AI tools that can process vast amounts of information, summarize dense research and identify core insights with remarkable precision. Used well, these technologies can become the most effective way to understand technology itself. And that’s how sensible investment in innovation begins: with a genuine grasp of what’s being invested in.
The author is a Singapore-based innovation investor for GenInnov Pte Ltd