Lawyers for multiple U.S. states have filed a lawsuit accusing Meta (NASDAQ: META) of, amongst other things, consumer fraud, unfair competition, and deceptive acts and practices that may be causing harm to teenagers and children.

A group of 34 U.S. states filed a lawsuit against Meta, the company behind Facebook and Instagram, accusing it of engaging in manipulative and harmful practices, particularly in the way its artificial intelligence (AI)-assisted recommendation algorithms foster addictive behavior in young people.

Filed with the U.S. District Court for the Northern District of California on October 24, the lawsuit was submitted by lawyers representing states across the U.S., including California, New York, Ohio, South Dakota, Virginia, and Louisiana. It accused Meta of multiple offenses, including consumer fraud, violating the Children’s Online Privacy Protection Act (COPPA) rule, making false or misleading statements, and deceptive acts or practices.

“Over the past decade, Meta—itself and through its flagship Social Media Platforms Facebook and Instagram…—has profoundly altered the psychological and social realities of a generation of young Americans,” said the filing.

“Meta has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens. Its motive is profit, and in seeking to maximize its financial gains, Meta has repeatedly misled the public about the substantial dangers of its Social Media Platforms.”

The lawsuit also claimed that Meta has concealed how its platforms exploit and manipulate teenagers and children while ignoring the “sweeping damage” being done to their mental and physical health.

“In doing so, Meta engaged in, and continues to engage in, deceptive and unlawful conduct in violation of state and federal law,” said the filing.

The suit accuses Meta of a four-part “scheme” involving:

  1. Through its development of Instagram and Facebook, creating a business model focused on maximizing young users’ time and attention spent on its social media platforms;
  2. Designing and deploying harmful and psychologically manipulative product features to induce young users’ compulsive and extended Platform use while falsely assuring the public that its features were safe and suitable for young users;
  3. Publishing misleading reports boasting a deceptively low incidence of user harm; and
  4. Refusing to abandon its use of known harmful features, instead redoubling its efforts to misrepresent, conceal, and downplay the impact of those features on young users’ mental and physical health.

One example of the app features the lawsuit singles out is the “Like” button, which the states’ representatives allege promotes compulsive use and harms young users’ mental health.

Another specific concern is Meta’s recommendation algorithms—which curate content for users based on previously harvested data—and their employment of AI technology. These AI algorithms, said the states’ lawyers, were designed to capture users’ attention and keep them engaged on the platforms.
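
To make the mechanism concrete, the sketch below shows, in purely illustrative Python, what an engagement-ranked feed looks like at its simplest: candidate posts are scored by predicted engagement, and the highest-scoring posts are shown first. The Post fields, the scoring weights, and the rank_feed function are assumptions made for this example and do not describe Meta’s actual systems.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like_prob: float   # model-estimated chance the user reacts to the post
    predicted_dwell_secs: float  # model-estimated time the user will spend on the post

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order candidate posts by a simple predicted-engagement score.

    The weights are arbitrary placeholders; a production system would learn
    them from logged interaction data rather than hard-coding them.
    """
    def score(post: Post) -> float:
        return 0.7 * post.predicted_like_prob + 0.3 * (post.predicted_dwell_secs / 60.0)

    return sorted(candidates, key=score, reverse=True)

# Example: the post with higher predicted engagement is ranked first.
feed = rank_feed([
    Post("a", predicted_like_prob=0.10, predicted_dwell_secs=45),
    Post("b", predicted_like_prob=0.60, predicted_dwell_secs=20),
])
print([p.post_id for p in feed])  # ['b', 'a']
```

The states’ complaint is precisely that optimizing for this kind of score keeps young users on the platform longer, regardless of the content’s effect on them.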

“Meta’s algorithms apply not only to material generated by users but also to advertisements,” said the filing, which raised further concerns about fostering potentially damaging compulsive spending behaviors in youths.

As evidence, the document quotes Meta’s former Chief Operating Officer, Sheryl Sandberg, who in 2019 told investors: “[a]cross all of our platforms and formats, we’re investing in AI [artificial intelligence] to make ads more relevant and effective. In Q4, we developed new AI ranking models to help people see ads they’re more likely to be interested in.”

Lawyers for the states are seeking a range of remedies, including substantial civil penalties.

Meta said in a statement it was “disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”

AI regulation on the horizon

The timing of this multi-state lawsuit voicing concerns about the misuse of AI algorithms seems particularly prescient, coming as it did a day before a report suggesting U.S. President Joe Biden was about to unveil a much-anticipated executive order on regulating AI.

Published by The Washington Post on October 25, the report cited several people familiar with the matter and said the executive order would require “advanced AI models to undergo assessments before they can be used by federal workers.”

According to the report, the executive order is expected to be issued on Monday and will mandate that AI models face stringent evaluations before they’re used by federal personnel. It also aims to ease the barriers for highly skilled workers seeking to immigrate to the U.S. in order to boost the country’s technological competitiveness.

For artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.
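
As a rough illustration of the data-ownership idea, the hypothetical Python sketch below fingerprints a training dataset with a SHA-256 hash and assembles the provenance record that would be anchored on a ledger. The function names and record fields are invented for this example, and the actual write to a blockchain is out of scope here.

```python
import hashlib
import json
import time

def fingerprint_dataset(records: list[dict]) -> str:
    """Return a SHA-256 digest of a dataset, using a deterministic serialization."""
    payload = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def build_provenance_record(dataset_digest: str, owner: str) -> dict:
    """Assemble the entry that would be anchored on a blockchain.

    Writing this record to an enterprise ledger (not shown) is what would make
    the digest tamper-evident; this sketch only prepares the payload.
    """
    return {
        "digest": dataset_digest,
        "owner": owner,
        "timestamp": int(time.time()),
    }

# Example: any later change to the records would produce a different digest,
# so a mismatch against the anchored record reveals tampering.
data = [{"text": "example training sample", "label": "safe"}]
print(build_provenance_record(fingerprint_dataset(data), owner="example-org"))
```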

Watch: AI truly is not generative, it’s synthetic
