Since the launch of ChatGPT in November 2022, artificial intelligence (AI) models have transformed the tech sector, disrupting entire industries and changing the ways in which many tasks are performed.


Almost every major tech company has doubled down on creating its own AI agent since then, as it has become increasingly clear that these large language models (LLMs) are ushering in the next frontier of technology.

Magnificent 7 leaders such as Google (GOOGL) and Microsoft (MSFT) have produced highly functional models, while fast-growing startup Anthropic’s Claude AI has won significant praise from members of the tech community.

However, another tech company recently rolled out an AI agent, one that is reportedly able to outperform many of its peers. The fact that the firm in question is currently under intense legal scrutiny raises some questions about its AI model and whether users can or should trust it.

A powerful new AI model is being rolled out but it may have too much power.


A new AI model is producing more questions than answers

Through the first month of 2025, many people have been hyper-focused on social media platform TikTok, specifically on whether it will remain available to U.S. users. The app, owned by Chinese company ByteDance, has been caught in regulatory crossfire due to national security and user data privacy concerns.

Related: Microsoft announcement lays out urgent advice for tech companies

As it turns out, though, ByteDance has been busy with projects outside of TikTok, including building its own AI model. The company recently introduced UI-TARS, its new AI agent that can understand graphical user interfaces (GUIs) and apply reasoning to tasks. Per VentureBeat:

“Trained on roughly 50B tokens and offered in 7B and 72B parameter versions, the PC/MacOS agent achieves state-of-the-art (SOTA) performance on 10-plus GUI benchmarks across performance, perception, grounding and overall agent capabilities, consistently beating out OpenAI’s GPT-4o, Claude and Google’s Gemini.”

The outlet highlights the advanced nature of the new ByteDance AI model, stating that it can “perform complex workflows” by taking control of the computers on which it is run. However, given the problems facing the platform that made ByteDance famous, questions abound regarding how it will be used.

Can users trust the AI model created by the company behind TikTok? If so, is it advisable to do so?

According to some experts, the answer is no. “I would be very concerned about sending personal data to ByteDance, as it could potentially be accessed by the Chinese government,” states Nathan Brunner, CEO of boterview.

Brunner highlights further areas of concern regarding ByteDance AI models, specifically that they are not open source and lack transparency. He also notes that the limitations and biases of these models are likely not well understood.

Internal messages from Meta leaders reveal fierce AI rivalry

TikTok is back, but users sound alarm on a startling change

New grassroots group wants to save social media from billionaires

Deborah Perry Piscione, co-founder and CEO of Work3 Institute, also advises users against trusting the new AI model, stating, “The institutional framework of ByteDance’s AI combined with their historically opaque approach to data governance, raises legitimate concerns.”

Both she and Brunner highlight alleged ties to China’s government as concerning elements that people should weigh before using UI-TARS, despite its strong performance on agent workflows.

How to handle the ByteDance model problem

Brunner and Piscione aren’t the only AI experts who advise skepticism. Lisa Martin, a research director at The Futurum Group, also sees potential problems stemming from the lack of transparency around UI-TARS, specifically how its AI algorithms work.

Related: MrBeast makes bombshell TikTok announcement

“We don’t know if they comply with the global norms of ‘ethical AI,’ nor do we know if ByteDance is sharing American user data with the Chinese government (big red flags),” she states. “ByteDance hasn’t been transparent with that, and that is a valid concern and a strong reason to not trust ByteDance AI.”

Martin notes that Oracle (ORCL) stores U.S. TikTok user data on its U.S.-based servers, which she sees as a reason the model may be trustworthy. She also raises the question, though, of whether that arrangement requires ByteDance “to be more transparent with the American government about how American TikTok data is used in China.”

To her knowledge, the answer is no. If that is correct, it could certainly be concerning for potential users who may have trouble trusting the new ByteDance AI model.

Despite these concerns, Martin also notes that she sees a potential path forward for U.S. users to use the ByteDance AI agent safely.

Suppose a buyer, such as MrBeast or Kevin O’Leary, were to purchase the 50% stake in TikTok that President Donald Trump has mandated. That might allow the U.S. to “impose regulations on ByteDance for transparent data handling and require independent audits of its AI systems and data sharing practices.”

Related: Veteran fund manager issues dire S&P 500 warning for 2025