Good morning, happy Friday, and welcome to Tech News Now, TheStreet’s daily tech rundown. 

It has been a litigious week, which is not a sentence I think I have ever said before. A bunch of new lawsuits were filed against OpenAI, two from news organizations alleging yet more copyright infringement, and one a class action from a private citizen alleging massive violations of privacy law, among other things. 

In today’s edition, we’re covering yet another lawsuit filed against OpenAI, this time from billionaire tech magnate Elon Musk. We’re also covering X’s rough first hearing in its lawsuit against the Center for Countering Digital Hate, plus Meta’s decision to pull out of a news deal in Australia. 

Tickers we’re watching today: (META) and (MSFT)

Don’t Miss: Yesterday, we published a breakdown of all the copyright cases currently on OpenAI’s docket. 

We also published a breakdown of a new brand of AI-enhanced fraud: tax fraud. 

Let’s get into it. 

Related: AI tax fraud: Why it’s so dangerous and how to protect yourself from it

Meta is killing Facebook News

Facebook launched the Facebook News tab in 2019, simultaneously inking a number of three-year deals with news publishers worth around $105 million in total. 

According to a new blog post, Facebook News is about to breathe its last. Meta said Feb. 29 that it will “deprecate Facebook news” in early April in the U.S. and Australia and will not enter into new commercial deals with news publishers. It has already done so in the U.K., France and Germany. 

“This is part of an ongoing effort to better align our investments to our products and services people value the most,” Meta said in a statement. “As a company, we have to focus our time and resources on things people tell us they want to see more of on the platform, including short-form video.”

According to Meta, news made up around 3% of what people see in their Facebook feed in 2023. 

As expected. Showdown time. Facebook and Instagram will presumably block sharing news as in Canada and dare the govt and/or be designated by the govt. Facebook will lose, they can’t keep exiting. It’s bad for their experience and their audience. Make sure to get their A/B splits. https://t.co/71ZnAlK1mt

— Jason Kint (@jason_kint) March 1, 2024

The move will pit Meta against Australia and the country’s 2021 News Media and Digital Platforms Mandatory Bargaining Code, a law designed to require large digital platforms to pay for Australian news content. Meta said that, even without a dedicated feed, news content will still be viewable on the platform. 

“The idea that one company can profit from others’ investment, not just investment in capital but investment in people, investment in journalism, is unfair,” Prime Minister Anthony Albanese told reporters. “That’s not the Australian way.”

Related: Here are all the copyright lawsuits against ChatGPT-maker OpenAI

X has a bad day in court

Last year, Elon Musk’s X filed a lawsuit against the nonprofit The Center for Countering Digital Hate (CCDH), accusing the nonprofit of violating X’s terms of service, engaging in illegal hacking and scaring advertisers away from the site. 

Senior District Judge Charles Breyer heard the case in a San Francisco federal court on Thursday, and he didn’t seem to totally buy into X’s arguments. 

“You could have brought a defamation case; you didn’t bring a defamation case,” Breyer told X’s attorney. “And that’s significant.”

Breyer said that in order for X to “collect one dime” of the millions in damages the company is seeking, X would have to prove that the CCDH knew X’s terms of service were going to change to allow “neo-Nazi, white supremacist, misogynist and spreaders of dangerous conspiracy theories” back on the site before that change occurred. 

“I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is,” Breyer said. 

X’s attorney argued that users agree to changes in terms of service by continuing to use the platform, something that Breyer called “one of the most vapid extensions of the law that I’ve ever heard.”

“‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s foreseeable.’ I mean, that argument is truly, is truly remarkable,” he said. 

Related: Deepfake program shows scary and destructive side of AI technology

Elon Musk sues OpenAI; the billionaire wants his money back

Here we go. The big one. 

Way back in 2015 (that was almost 10 years ago, by the way), Musk partnered up with Sam Altman and a few other Silicon Valley folks to launch a non-profit artificial intelligence research lab called OpenAI. 

The lab’s mission, focused on openness and transparency, was to develop artificial general intelligence (AGI) to benefit humanity rather than to maximize shareholder profits. It was designed to be a transparent counterweight to Google. Since that noble beginning, Musk and his millions have left the company, which is now helmed by Altman, and OpenAI’s charter seems to have shifted somewhat. 

The company now operates as a hybrid, pairing its original nonprofit with a capped for-profit arm. And in Musk’s absence, OpenAI — in addition to commercializing its products, obscuring transparency efforts and closing off its technology — turned to Microsoft for funding, receiving a $13 billion investment. 

In a lawsuit filed Feb. 29, Musk accused OpenAI of breach of contract, breach of fiduciary duty and unfair business practices, among other things. He is asking the court to order OpenAI to return to its original founding charter, namely by making its AI research available to the public. 

Oh, and he’s also seeking the restitution of all the money he poured into OpenAI while they were engaged in the “unfair” practices described in the suit, in addition to general, compensatory and punitive damages. 

More deep dives on AI:

Think tank director warns of the danger around ‘non-democratic tech leaders deciding the future’

George Carlin resurrected – without permission – by self-described ‘comedy AI’

Artificial Intelligence is a sustainability nightmare — but it doesn’t have to be

“OpenAI has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” the suit claims. “Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity.”

The suit additionally argues that OpenAI’s GPT-4, which powers its premier version of ChatGPT, is “capable of reasoning,” something that researchers do not agree with (and something that is difficult to research, given OpenAI’s lack of transparency). 

A paper posted online by AI researcher Melanie Mitchell earlier in February found that “GPT models are still lacking the kind of abstract reasoning needed for human-like fluid intelligence.”

A separate paper, published in August and not yet peer-reviewed, argues that GPT-4 “can’t reason.” 

AI researcher Gary Marcus has called GPT-4 “one more giant step for hype, but not necessarily a giant step for science, AGI, or humanity.”

“There are no scientific publications describing the design of GPT-4,” the suit says, echoing Marcus’ earlier writing. “Instead, there are just press releases bragging about performance. On information and belief, this secrecy is primarily driven by commercial considerations, not safety.”

The suit says that Musk contributed a total of $44 million to OpenAI between 2016 and 2020. 

“But where some like Mr. Musk see an existential threat in AGI, others see AGI as a source of profit and power,” the suit says. 

Musk, meanwhile, is doing plenty of his own work in AI, from self-driving cars to Tesla’s Optimus robot to Grok, a large language model designed to compete with ChatGPT. Indeed, many analysts and investors view AI as an integral part of Tesla’s business. 

Musk said in August: “I think we may have figured out some aspects of AGI. The car has a mind. Not an enormous mind, but a mind nonetheless.”

Many researchers, however, do not believe AGI is possible and have further said that such hype around the potential of AGI is a ploy for power. 

OpenAI did not respond to a request for comment. 

Related: The ethics of artificial intelligence: A path toward responsible AI

The AI Corner: The road to AGI

This feels fitting. 

To close out this Friday’s rundown, here’s a talk Marcus recently gave called “No AGI without Neurosymbolic AI.” 

“Intelligence is multi-faceted and we shouldn’t expect any one-size-fits-all solution here. We’re just not going to solve all of this in the next few years,” he said. 

Marcus has been arguing for a while that the industry needs a “paradigm shift” away from large language models in order to achieve something resembling AGI.

“They’ll give you something that looks like reasoning. They’ll be correct 75% of the time, but that’s not what reasoning is,” he said. “We don’t know how to make basic ethical principles, we’re still struggling with bias. We still have a long way to go.”

If you want the summary, go to minute 28 and listen to Marcus break down his final slide. 

Contact Ian with tips and AI stories via email, [email protected], or Signal 732-804-1223.

Related: Here’s the Startup That Could Win Bill Gates’ AI race