It seems that almost every week, the world of artificial intelligence (AI) evolves in ways that can be hard to follow. Even tech experts have cautioned that the speed at which companies are moving is concerning.

Since the launch of ChatGPT in 2022, many people have wondered just how far this technology can take us. For some, that has meant raising concerns about what may happen if AI is turned to the wrong ends.


Common fears about AI have often centered on robots taking people's jobs, or on weapons manufacturers using AI to build and deploy systems for immoral purposes in combat. But now a new AI fear appears to be rising.

One AI expert recently made a unique but frightening analogy for a new use of AI that he finds troubling.    

Gary Marcus, an AI expert known for raising concerns about the technology, has published a controversial new take.

AI may be nearing its “Black Mirror” moment, warns scientist

If you’ve read books or articles about AI in recent years, there is a good chance some of them were written by Gary Marcus. A noted psychologist, cognitive scientist and best-selling author, Marcus has been breaking down the dangers of AI for years.

His X bio sums up his experience quite well – “built two AI companies, wrote six books, tried to warn you about a lot of things.” And lately, Marcus hasn’t shied away from warning the masses about potential problems he sees regarding the rise of AI, sometimes using analogies that can make a reader shiver.

Related: Controversial author publishes shocking AI conversation with ChatGPT

Most recently, that came in the form of a think piece referencing a popular Netflix show titled “Black Mirror.” Known for its thought-provoking writing that explores techno-paranoia, the British program has garnered a significant following of late. Marcus just highlighted why it may be striking a chord with many viewers.

In a March 10 blog post published on his Substack page, Marcus on AI, the author calls attention to the Black Mirror episode “Nosedive.” “The episode is set in a world where people can rate each other from one to five stars, using their smartphones, for every interaction they have, which can impact their socioeconomic status,” states a Wikipedia summary.

From Marcus’ perspective, this is eerily similar to how certain government agencies are deploying AI to scour the social media profiles of activists, primarily college students, for information that may lead to their visas being revoked. And as he makes clear, this is a key example of the powers of AI being harnessed for the wrong reasons.

Tech experts seem to share Marcus’ concern regarding this new use of AI. Miran Antamian, founder and CEO of visual library BookWatch, spoke to TheStreet, noting that he sees the concern as quite reasonable.


“The risks are real, an AI system could unintentionally botch and contextualize an individual’s posts in a manner that they get their visa revoked without necessary scrutiny,” he states. “If the government pursues this method, trust, privacy and freedom of expression will be severely undermined. There is no question that caution needs to be exercised while wielding these tools.”

AI tech is new, surveillance tech isn’t

Worries about AI expanding the capabilities of surveillance technology are far from new. Concerns that systems such as facial recognition software could be abused by law enforcement predate the current AI revolution, widely thought to have kicked off in 2022 with the rise of ChatGPT.

However, other experts point out that long before AI existed, leaders were prone to abusing surveillance technology and using it for purposes that some consider inappropriate. Professor Carlos Gershenson-Garcia of Binghamton University notes that while AI is a highly sophisticated tool, the same action could have been taken without it.

Related: Experts raise red flags regarding new AI startup evaluation tool

“China has been using them successfully for several years: they installed cameras in many public places with advanced face recognition technology. So they know where you’ve been and with whom. They have a ‘citizen score,’ and if yours is not too good, you do not have access to promotions, travels,” he explains.

The case Marcus references in his blog post isn’t the only recent example of AI being used this way. Other experts have raised concerns about the so-called Department of Government Efficiency (DOGE) rolling out an AI chatbot to replace the work previously done by federal staffers.

The effectiveness of AI systems at either task remains to be seen, but more and more experts are expressing alarm at these developments. 

Related: Veteran fund manager unveils eye-popping S&P 500 forecast