Hang on… Did Microsoft just admit that AI could dumb us down?
More likely, it wants to stay ahead of the curve as AI disrupts certain jobs, and to ensure that its tools remain useful to businesses. At a time when Big Tech is racing to make AI models bigger, that is a sensible approach, both for the industry’s business model and for its social outcomes.
The study, carried out with researchers at Carnegie Mellon University, surveyed 319 knowledge workers about how they used AI. Examples included a teacher using DALL-E 2 to generate images for a hand-washing presentation for her students and a commodities trader using ChatGPT to generate strategies.
The researchers found a striking pattern: The more participants trusted AI for certain tasks, the less they practised those skills, such as writing, analysis and critical evaluation. As a result, they self-reported an atrophy of those skills. Several said they started to doubt their abilities to perform tasks such as verifying grammar in text or composing legal letters, which led them to accept whatever GenAI gave them.
And they were even less likely to practise their skills when there was time pressure. “In sales, I must reach a certain quota daily or risk losing my job,” one respondent said. “Ergo, I use AI to save time and don’t have much room to ponder over the result.”
A similar study by Anthropic, which looked at how people were using its AI model Claude, found that the top skill exhibited by the chatbot in conversations was “critical thinking.”
This paints the picture of a future where professional workers become managers of AI’s output rather than originators of ideas and content. OpenAI’s Deep Research tool, which costs $200 a month, can conduct research across the internet, scouring images, PDFs and text to produce detailed reports with citations.
One result is that cognitive work is going to transform, according to a 12 February note to investors from Deutsche Bank. “Humans will be rewarded for asking their AI agent the right questions, in the right way, and then using their judgment to assess and iterate on the answers,” research analyst Adrian Cox writes. “Much of the rest of the cognitive process will be offloaded.”
As frightening as that sounds, consider that Socrates once worried that writing would erode memory, that calculators were expected to kill our math skills and that GPS navigation would leave us hopelessly lost without our phones. That last one may be somewhat true, but by and large, humans have found other uses for their brains when they outsource their thinking, even if our math and navigation skills have grown lazier.
What makes AI different?
It encroaches on a much broader part of our cognition. We’re put in positions to think critically far more often than we are to calculate sums or chart routes—whether crafting a sensitive email or deciding what to flag to our boss in a report. That could leave us less able to do core professional work, or more vulnerable to propaganda. And it leads back to the question of why Microsoft—which makes money from sales of OpenAI’s GPT models—published these findings.
There’s a clue in the report itself, where the authors note that without knowing how knowledge workers use AI, and how their brains work when they do, they risk creating products “that do not address workers’ real needs.” If a sales manager’s thinking skills deteriorate from using Microsoft’s AI products, the quality of their work might decline too.
A fascinating finding in Microsoft’s study was that the more confident people were in their AI tool’s abilities, the less likely they were to double-check its output. Given that AI still has a tendency to hallucinate, that raises the risk of poor-quality work. What happens when employers start noticing a decline in performance? They might blame it on the worker, but they might also blame it on the AI, which would be bad for Microsoft.
Tech companies have loudly marketed AI as a tool that will ‘augment’ our intelligence, not replace it, yet this study suggests the opposite may be happening. So the lesson for Microsoft lies in how it aims future products: not at making them more powerful, but at designing them to enhance rather than erode human capabilities.
Perhaps, for instance, ChatGPT and its ilk could prod their users to come up with their own original thoughts once in a while. If they don’t, businesses could end up with workforces that can do more with less, but that are also clueless about whether their newfound efficiency is taking them in the wrong direction. ©Bloomberg