India’s IT firms have a unique opportunity in AI’s trust deficit

This layer hinges on human oversight, transparency and explainability—precisely the ‘trust’ dimensions that could turn Generative AI from liability to a lucrative revenue stream for Indian providers.

Tier-1 software majors like TCS have woven GenAI into their workflows, emphasizing pilot deployments and internal automation over large-scale consulting mandates. Their strength lies in retraining developers on tools like GitHub Copilot and low-code platforms, automating boilerplate coding while retaining humans in the loop for critical paths. That ‘human in the loop’ ethos directly addresses one of the central concerns Shah identifies: ensuring systems remain aligned with human intentions.

Tier-2 providers such as LTI Mindtree lack TCS’s scale, but they shine in agility. Their typical positioning as productivity enhancers rather than code replacers allows them to layer trust-focused oversight atop GenAI output; without such oversight, many enterprise deployments will stall. This lets them offer faster proofs of concept to clients anxious about AI accuracy and auditability.

When contrasted with non-Indian players like Accenture and IBM, a distinct divergence appears. Accenture has already booked billions of dollars in GenAI projects and IBM is realigning its global consulting structure around AI units. They are aggressively pushing end-to-end AI transformations—including automated code generation pipelines—with less apparent concern for incremental human mediation. But that appetite for scale means they must also invest heavily to close the AI trust deficit. 

For Indian firms, the trust deficit represents not just a compliance challenge, but a commercial opening. Trust in AI is not merely abstract ethical talk: it is about reliability, explainability and behaviour certification. Shah writes that trust can be assessed “by looking at the relationship between the functionality of the technology and the intervals of human intervention in the process. That means that the less intervention, the greater the confidence.”

Yet, in practice, enterprises often demand greater human oversight for sensitive use cases. For Indian providers, whose business model runs on cost-effective human resources, enabling that oversight at scale can be a strategic differentiator.

Indian firms invested heavily in the past in automation for IT infrastructure and business process operations, and those automation playbooks now form the backbone of their enterprise GenAI strategies. Firms often train developers in prompt engineering and validation alongside generative code output. Human reviewers validate, correct and certify code before deployment, creating an audit trail. This aligns with the thesis that to build trust, you must create human-mediated checkpoints that govern AI behaviour.

Relationships with hyperscalers remain robust: Tier-1 providers co-engineer GenAI offerings with Azure, AWS and Google Cloud, hosting models on hyperscaler infrastructure rather than building vast data centres. Tier-2 firms integrate with hyperscalers or Indian startup cloud platforms. In contexts where sovereignty and residency matter, Indian providers partner with startups to offer managed GenAI tools within India. Domestic hosting also helps build trust, particularly with regulators.

Indian firms collaborate with niche startup AI vendors for explainability tools, code‑lineage trackers and behaviour‑audit platforms. They are building or buying tooling to surface provenance, metrics and error‑diagnosis alongside code generation modules. In contrast, non-Indian service providers tend to sell large-scale generative code deployments as transformational consulting journeys. Indian firms can undercut on price while building trust layer offerings that rely on domestic teams and documentation.

The trust deficit thus could become a money-spinner for Indian IT services. As organizations grapple with AI bias, hallucinations and a lack of transparency, demand will grow for human-mediated code generation services. Human reviewers need to monitor, validate and correct AI-generated code. The ‘human in the loop’ thus becomes not only a safety net, but a commercial lever.

However, one size does not fit all. Tier-1 Indian players should continue embedding trust‑layer capabilities into their GenAI practice by building specialized AI governance units, collaborating with domestic ‘explainability’ startups and developing billing models that put a price on trust-related work. Tier-2 firms should double down on managed code‑agent offerings, with built-in human review workflows, transparency dashboards and prompt governance.

For global giants like Accenture and IBM, offering tiered pricing on trust-enhanced deployments and adapting consulting models to regional cost structures may help. Across the board, the most viable strategy is a hybrid model that combines GenAI productivity gains with layered human oversight, clear provenance, explainability tooling and risk control. The trust deficit is not just a challenge; it is fast becoming a strategic opening—one that Indian providers are uniquely equipped to monetize.

The author is co-founder of Siana Capital, a venture fund manager.