This story was initially published as a part of Tech News Now, TheStreet’s daily tech rundown. 

Back in 2015, Elon Musk — afraid of the existential threat of artificial intelligence — partnered up with Sam Altman and a few other Silicon Valley folks to launch a non-profit AI research lab called OpenAI.

The lab’s mission, focused on openness and transparency, was to develop artificial general intelligence (AGI) to benefit humanity, rather than to maximize shareholder profits. It was designed to be a transparent counterweight to Google. Since that noble beginning, Musk and his millions have left the company, which is now helmed by Altman, and OpenAI’s charter seems to have shifted somewhat.

The company now operates as a hybrid: a nonprofit governing a capped-profit arm. And in Musk’s absence, OpenAI has commercialized its products, pulled back on transparency and closed off its technology, while turning to Microsoft for funding and receiving a $13 billion investment.

Related: Tesla Chief Musk is pleased with the result of OpenAI’s ‘Game of Thrones’

In a lawsuit filed Feb. 29, Musk has accused OpenAI of breach of contract, breach of fiduciary duty and unfair business practices, among other things. He is seeking a court order forcing OpenAI to return to its founding charter, namely its commitments to transparency and to making AI research available to the public.

Musk is also seeking the restitution of all the money he poured into OpenAI while it was engaged in the “unfair” practices described in the suit, in addition to general, compensatory and punitive damages.

More deep dives on AI:

Think tank director warns of the danger around ‘non-democratic tech leaders deciding the future’
George Carlin resurrected – without permission – by self-described ‘comedy AI’
Artificial Intelligence is a sustainability nightmare — but it doesn’t have to be

“OpenAI has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” the suit claims. “Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity.”

AI researcher Gary Marcus said in a post that he agrees with Musk’s overview of the facts: “By any reasonable metric, OpenAI as a whole no longer works according to the mission that was originally set out. Elon did not get what he paid and worked for.”

“I won’t predict who will win, but I will say that Elon has a point. And anyone making deals with Altman would be wise to read it,” Marcus said. “The company Sam and Greg built has little to do with what was originally promised.”

The suit also points out the boardroom drama that came out of OpenAI last year, when Altman was fired and then reinstated over the course of a weekend. The suit says that “on information and belief, Altman’s firing was due in part to OpenAI’s breakthrough in realizing AGI,” adding that Altman’s reinstatement was the result of Microsoft’s “coercive power” over OpenAI and its board. 

“With the reinstatement of Mr. Altman and the restructuring of the Board, OpenAI’s corporate structure that had been designed as a system of checks and balances between the nonprofit arm, for-profit arm, the Board, and the CEO to ensure the non-profit mission was being carried out, collapsed overnight,” the suit says. 

Musk at the time expressed intense concern over the boardroom shuffle taking place at his old startup. 

OpenAI did not respond to a request for comment. 

Related: AI tax fraud: Why it’s so dangerous and how to protect yourself from it

The AGI of it all

The suit additionally makes the argument that OpenAI’s GPT-4, which powers its premier version of ChatGPT, is “capable of reasoning,” a claim many researchers dispute (and one that is difficult to evaluate, given OpenAI’s lack of transparency).

A paper posted online by AI researcher Melanie Mitchell earlier in February found that “GPT models are still lacking the kind of abstract reasoning needed for human-like fluid intelligence.”

A separate paper, posted in August and not yet peer-reviewed, argues that GPT-4 “can’t reason.”

Marcus has referred to GPT-4 as “one more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.”

“There are no scientific publications describing the design of GPT-4,” the suit says, echoing Marcus’ earlier writing. “Instead, there are just press releases bragging about performance. On information and belief, this secrecy is primarily driven by commercial considerations, not safety.”

The complaint says that Musk contributed a total of $44 million to OpenAI between 2016 and 2020. The company is now valued at around $86 billion.

“But where some like Mr. Musk see an existential threat in AGI, others see AGI as a source of profit and power,” the suit says.

Related: Experts Explain the Issues With Elon Musk’s AI Safety Plan

Musk, AI and business 

Musk, meanwhile, is doing plenty of his own work in AI, from self-driving cars to Tesla’s Optimus robot, to Grok, a large language model designed to compete with ChatGPT. Indeed, many analysts and investors view AI as an integral part of Tesla’s business.

Six months after signing a letter calling for a pause in the development of advanced AI systems, Musk launched xAI — which later launched Grok — saying at the time that his vision for safe AI involves “growing” a “curious and truth-seeking” AI model. 

Two experts TheStreet spoke with at the time noted their skepticism around this approach. 

“I don’t think that attributing human attributes to AI models is a good idea, or accurate in any way. Models can’t be curious because they’re not sentient,” AI expert and researcher Dr. Sasha Luccioni told TheStreet in July. 

Musk has also hinted that Tesla, through its self-driving efforts, has “figured out some aspects of AGI,” saying in August: “The car has a mind. Not an enormous mind, but a mind nonetheless.”

Many researchers, however, have questioned whether AGI is possible at all, with some claiming that such hype around the dangerous or benevolently world-changing potential of AGI is little more than a ploy for power.

Contact Ian with tips and AI stories via email, [email protected], or Signal 732-804-1223.

Related: Meet Sam Altman, the man behind OpenAI’s revolutionary ChatGPT