Stay ahead of the curve in the age of AI: Be more human
This would make OpenAI and other frontier AI labs fret, but more significantly, it would cause heartburn among us humans as we wonder whether AI is already superior to us.
Parmy Olson recently wrote about a Microsoft-CMU study of 319 knowledge workers and how they worked with AI. A startling finding was that as they began trusting AI with skills such as writing, analysis and evaluation, they practised those skills less themselves: “They self-reported an atrophy of those skills.” This led them to accept whatever output GenAI gave them, with minimal or no checking.
AI awakens in us a primal fear of losing our jobs, livelihood and purpose. This is not a new feeling. Even Socrates worried that writing “will lead to the erosion of memory.” Calculators were expected to destroy our arithmetic skills. And computers, we feared, would make millions of knowledge workers obsolete.
Even so, frontier AI labs are racing to achieve AGI, or artificial general intelligence, the point at which AI will be as intelligent as, or more intelligent than, most human beings. Futurist Ray Kurzweil predicted that this singularity would occur in 2045. Generative AI and ChatGPT startled him and others enough to advance that date to 2026, or at least to sometime within the next five years.
My view on the singularity is that we do not need to wait until 2045, or even 2026. AI has already started taking over our world, albeit gradually. It is not that you will wake up one morning in January 2029 and find that machines have taken over. It will happen the way it happened to the mythical frog in a vessel of slowly boiling water: the water heated so gradually that the frog was lulled into warm comfort and never realized it was being cooked.
So, what is this slowly boiling water of AI around us? Fifteen years ago, we remembered everyone’s phone number. Now, our phones remember them for us. We used to remember how to get from Place A to Place B. Now, we do not, as Google Maps takes us there. At 4pm every alternate day, my phone pops up directions that push me to leave for physiotherapy, and I do. Whenever autonomous cars become a reality, my phone will summon one half an hour in advance and take me there, as I gradually lose the ability to remember routes and to drive.
We dream of intelligent homes that are powered by AI sensors, where a mixer makes my protein shake on its own and the AC self-activates while my GPT-powered microwave and fridge talk to each other to prepare my meal—you get the picture of this utopia.
When this happens, will our skills atrophy, as the survey respondents reported? What can we humans do as AI and its agents become stronger, more autonomous and super-intelligent?
One, we will need to ‘contain’ AI so that it is safer, beneficial and does the things we want it to do. Mustafa Suleyman, Microsoft’s AI chief, talks about this in his book ‘The Coming Wave’, where he notes that something as innocuous as traffic can kill people if it is not constrained by rules. We created traffic signals and driving rules to ensure that our streets serve us well.
Two, we will have to become AI-literate. The technology is here and we need to deal with it. For that, it is important to absorb GenAI tools into our lives, as we did with English or arithmetic. We will need to work with AI tools and agents as co-workers, and so we must learn how to work with them effectively, safely and naturally. The definition of literacy will expand beyond reading, writing and arithmetic to include working with GenAI tools and agents.
Three, above all, we will need to rediscover the skills that are innately human: curiosity, compassion, use of language, logic, instinct, collaboration and so on. We will need to pick the right AI tools and agents for our work and homes, ask the right questions in the right way, and use our judgement and instinct to assess whether the answers or actions work for us; if not, we will need another iteration.
We will need to use our collaboration skills to work with other humans and agents, our compassion to decide if something is wrong, and our curiosity to see if things can be done in a better way.
In other words, as artificial intelligence becomes more powerful, we will need to become more human.
The author is a founder of AI&Beyond and wrote ‘The Tech Whisperer’.