Apprehension is running high. It barges into dinner-table conversations, boardroom briefings, and late-night Slack threads: Will artificial intelligence take my job? For the past two years, Silicon Valley has been the first to answer with an emphatic nod: yes, and faster than you think. It had all but rolled out the red carpet to hand the employee-of-the-year trophy to artificial intelligence. But wait a moment. Let’s dig deeper, for all that glitters is not gold, and sometimes it is the mirror that holds up the truth most boldly.

Salesforce, one of the world’s most influential enterprise software companies, seemed to embody the conviction that AI would replace workers. Until, suddenly, it didn’t. The story unfolding inside Salesforce is not a simple tale of automation triumphing over human labour. It is something far more instructive and unsettling. In the gap between promise and performance lies the real lesson for workers, executives, and policymakers alike.

There has always been a tug-of-war between those who believed AI would take employees’ seats in boardrooms and those who pedestalised human intellect. But recent observations hint at an AI bubble that might burst sooner than we thought.
When confidence cracks at the top
A year ago, companies’ and employees’ belief in Large Language Models (LLMs) bordered on the unshakable. The days when writing an email required us to stretch our cognitive abilities already feel nostalgic. Now all it takes is a perfectly structured prompt, and ta-daa, here is the best email you could ever write. Nor is it only email: AI can summarise meetings, write code, and build presentations in the blink of an eye. But while it shimmers and glitters on the surface, digging deeper reveals a murkier picture.

The companies that actively laid off employees to substitute them with artificial intelligence are lamenting their decisions, and the example of the moment is Salesforce. Sanjna Parulekar, Senior Vice President of Product Marketing, admitted that internal trust in these models has declined sharply. The industry’s once-confident narrative of AI as an all-purpose cognitive worker has begun to fray under real-world pressure. This shift matters because Salesforce is not a fringe player experimenting on the margins. It is the infrastructure behind customer relationships for thousands of global enterprises. When such a company publicly tempers its AI ambitions, it signals something deeper.
The layoffs that sparked the fear
The anxiety, however, did not begin with technical caveats. It began with numbers. Salesforce trimmed its support staff from roughly 9,000 to about 5,000 employees, a cut of nearly 4,000 roles. CEO Marc Benioff openly attributed the reduction to AI agents taking over work once done by humans. The statement travelled fast, hardening fears that white-collar work, once thought insulated, was now squarely in AI’s crosshairs.

For many workers, the message seemed clear: AI did not need to be perfect to be disruptive. It only needed to be good enough. The moral of the story, however, was not what its prelude predicted.
When “smart” becomes unreliable
As AI agents were deployed at scale, cracks began to appear. Muralidhar Krishnaprasad, the Chief Technology Officer of Agentforce, acknowledged a striking limitation: give a large language model more than eight instructions, and it begins to drop some entirely. For consumer chat, this might be forgivable. For enterprise operations, where compliance, precision, and predictability are non-negotiable, it is a red flag.

The consequences were not theoretical. Vivint, a home security company serving 2.5 million customers, found that AI agents tasked with sending satisfaction surveys simply failed to do so, without warning, explanation, or pattern. Salesforce eventually had to introduce deterministic triggers, rule-based automation that does exactly what it is told, every time, to restore reliability.

In another case, executives described “AI drift,” where agents lose focus when users ask irrelevant questions. A chatbot designed to guide a customer through a form might suddenly follow a conversational detour, forgetting its primary task altogether.

These are not minor bugs. They strike at the heart of whether AI can be trusted with responsibility.
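The difference between the two approaches can be sketched in a few lines of Python. This is a hypothetical illustration, not Salesforce’s actual code or API: a deterministic, rule-based trigger fires the survey every single time its condition holds, with no model in the loop to forget, drift, or improvise.

```python
# Hypothetical sketch of a deterministic trigger (rule-based automation),
# as opposed to delegating the "should we send a survey?" decision to an
# LLM agent. All names here are illustrative.

def should_send_survey(case: dict) -> bool:
    """Rule-based trigger: same input always yields the same decision."""
    return case["status"] == "closed" and not case["survey_sent"]

def process_case(case: dict, outbox: list) -> None:
    """If the rule fires, queue the survey and mark the case, every time."""
    if should_send_survey(case):
        outbox.append(f"survey:{case['id']}")
        case["survey_sent"] = True

outbox = []
cases = [
    {"id": 1, "status": "closed", "survey_sent": False},  # qualifies
    {"id": 2, "status": "open",   "survey_sent": False},  # still open
    {"id": 3, "status": "closed", "survey_sent": True},   # already surveyed
]
for c in cases:
    process_case(c, outbox)
# Only case 1 qualifies, and a rule never "forgets" to fire for it.
```

The point is not sophistication but predictability: the rule is auditable, testable, and incapable of the silent non-delivery that the probabilistic agents exhibited.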
The quiet return of boring technology
What Salesforce is now emphasising is telling. The company has begun to champion “deterministic” automation: systems that may be less glamorous and less conversational, but far more dependable. In plain terms, Salesforce is rediscovering the value of boring technology: software that behaves the same way every single time.

This signals a retreat from AI-first messaging, at least for now. Even Benioff, once among AI’s loudest champions, now says that strong data foundations, not AI models, sit at the top of Salesforce’s strategic priorities. The irony is difficult to miss. At the very moment when AI is credited with eliminating thousands of jobs, the company deploying it is pulling back from trusting it too much.
So, is AI taking jobs, or exposing organisational choices?
This is where the Salesforce story becomes more complicated and more honest. The layoffs were real. Jobs were lost. But the technology that replaced them is not yet the autonomous, infallible worker of popular imagination. Instead, it is a brittle system that requires guardrails, supervision, and, often, human correction.

What disappeared at Salesforce was not work itself, but a particular configuration of work. AI agents absorbed repetitive, high-volume tasks. Humans were removed from roles designed around scale rather than judgment. Yet, when judgment, nuance, and accountability mattered, AI faltered.

The uncomfortable truth is that companies may not be replacing humans because machines are superior, but because organisations are optimising for cost and for tolerance of error. In some roles, “mostly right” is acceptable. In others, it is catastrophic.
The real question we should be asking
So, will AI take your job? The Salesforce story suggests a more precise question: What kind of work is your job built on? Tasks that are repetitive, rule-light, and error-tolerant are undeniably vulnerable. But roles that require context, prioritisation, and accountability remain stubbornly human.

AI, for now, is not a worker. It is an amplifier: of efficiency, of mistakes, of organisational values. When deployed recklessly, it replaces people and breaks systems. When deployed carefully, it exposes how much judgment we once took for granted.

Salesforce’s partial retreat is not an AI failure. It is a reality check. The future of work will be decided not by how fast machines improve, but by how honestly companies acknowledge what machines still cannot be trusted to do.

And that, perhaps, is the most reassuring lesson of all.

