As artificial intelligence (AI) continues to transform entire industries, many people are searching for the answer to a pressing question: Will this new technology end up taking my job?

For many people, there doesn’t seem to be a clear answer, at least not yet. Retail stores have been replacing cashiers with self-service machines for years, and offices have begun to replace customer service personnel with AI agents. But other industries have been slower to shift toward an AI-centric business model.


Until recently, that is. In the past few months, tech companies have started laying off workers and doubling down on implementing AI into their operations, giving what many people consider to be a glimpse into the future of work.

One AI expert doesn’t share this perspective, though. He recently made a striking argument about generative AI, attempting to dispel what he sees as a popular myth.

Tech founder and author Gary Marcus has a surprising take on how AI will impact the workforce. 

Image source: Horacio Villalobos/Getty Images

Tech leader sounds off on the future of AI in the workforce

As AI systems have quickly evolved, there has been little doubt that the technology will play an increasingly important role in the modern workforce. One question that experts continue to debate is just how well AI can do certain jobs and when, if ever, it will be able to do them better than humans can.

Related: Leading cognitive scientist is sounding the alarm on new AI use

Noted AI researcher and cognitive scientist Gary Marcus recently weighed in on this topic, offering an opinion that most people likely didn’t see coming. Many tech leaders, including Bill Gates, have recently speculated that in the coming years, even highly skilled professionals, such as doctors, will be phased out by AI.

But as Marcus sees it, these doomsday predictions about AI taking people’s jobs may be overblown. On May 7, he responded to an X post touting an AI model that is allegedly smarter than 85% of humans and predicting it would replace almost all human workers by 2026.

Marcus didn’t hold back, making it extremely clear that he disagreed with the argument.

“Every business in the world has discovered in the last several months that GenAI is not in fact smart enough to replace most of their employees,” he stated. “Whatever you are reading from these (often gamed, sometimes contaminated) benchmarks does not reflect real-world reality.”

Other tech experts chimed in, some agreeing with Marcus’ statement. Peter Voss, founder of AI startup Aigo.ai, responded, claiming that “Not a single LLM is anywhere near *autonomously* learning to do a simple customer support job as well as a human.” He added that this could be demonstrated through a simple test.

While Marcus did not offer examples of companies that have reached this conclusion about generative AI, some recruitment professionals have made similar arguments.

In March 2025, Nickle LaMoreaux, chief human resources officer at IBM, stated that while AI would continue to change how offices operate, she didn’t believe it could fully replace human workers.

More AI News:

Microsoft introduces terrifying new AI tool, angers users
Mark Cuban makes bold statement on AI and your job
A major change is coming to ChatGPT that users will hate

“LaMoreaux expects AI tools will handle some of the more rudimentary work, but they can’t handle everything. They will make employees more productive by cutting down on lower-level work, but humans will still be needed to handle high-level decision-making work,” CNET reports.

Not all tech experts believe AI will make humans obsolete

One thing many tech experts seem to agree on is that AI will change how people do their jobs, in many cases making the work easier and improving its quality. At the same time, they believe AI will reduce, but not eliminate, the need for humans in the workplace.

Related: Scary new AI trend should terrify job seekers

Other industry leaders, such as Mark Cuban, have argued that human oversight will always be needed to ensure that AI systems complete their tasks correctly and provide accurate information, something the technology does not yet do consistently.

Marcus has also expressed concern about the trend of large language models (LLMs) generating what are called hallucinations, or incorrect statements that they confidently present as fact. In a recent blog post, he examined the phenomenon and offered insight into the problems it can pose.

“LLMs mimic the rough structure of human language, but 8 years and roughly half a trillion dollars after their introduction, they continue to lack a grasp of reality,” he stated.

That may also help explain why these AI tools are still not smart enough to replace human workers.

Related: Veteran fund manager unveils eye-popping S&P 500 forecast