The White House’s new policy framework for regulating generative artificial intelligence, released Friday, covers many areas, but one thing is clear: President Donald Trump wants the federal government to set the rules. And those rules appear to fall far short of what consumer and privacy advocates argue is necessary.
The generative AI revolution has been underway for years, and US legislation has been slow to catch up. This is despite growing awareness of AI’s harms and challenges: chatbots’ dangerous impacts on mental health and child development, widespread legal wrangling over copyright protections, and the dangerous spread of deepfakes and AI-powered scams, to name a few.
Sen. Marsha Blackburn introduced the new policy package, called The Trump America AI Act, in Congress on Thursday. The Tennessee Republican’s bill is an attempt to codify a vision based on Trump’s 2025 AI Action Plan, while delving into more legal specifics and providing guidance on implementing new laws (or changing existing ones).
Trump has maintained that the federal government should be responsible for regulating the AI industry — and that requiring AI companies to comply with 50 different sets of state laws would prevent the US from “winning” the global AI race. However, a proposal to temporarily ban states from regulating AI failed back in July, when it was removed at the last minute from the massive budget bill, known as the “One Big Beautiful Bill Act.”
Now, the White House is doubling down on its claim to be in charge, with a few exceptions. The plan addresses some of the biggest concerns people have about AI: job loss, copyright chaos for creators, rapidly expanding infrastructure such as data centers and the protection of vulnerable groups like children. But critics say it doesn’t go far enough to regulate the fast-growing AI industry.
“It is light on protection and heavy on promotion of dangerous AI systems,” Alan Butler, president and executive director of the Electronic Privacy Information Center, said in a statement. “The American people deserve better, and Congress should do better than this.”
The White House’s new proposed AI laws
The White House’s 2026 AI proposal says Congress should not create a new governing body to oversee AI rules, but should let existing agencies and subject-matter experts regulate as they see fit.
Protecting children: This is one area where the federal government won’t prevent states from creating laws. And many state governments are already leading the charge, especially in regulating romantic or companion chatbots.
The plan highlights protecting kids from AI-powered deepfakes, a major concern given AI’s use in creating child sexual abuse material. Shielding young people from the ill effects of AI is an ongoing battle, with several high-profile cases linking teenagers’ chatbot use to self-harm and suicide.
Blackburn’s policy plan includes general language related to kids’ online safety. Existing measures like the Kids Online Safety Act and the Children’s Online Privacy Protection Rule are, theoretically, designed to protect kids, but advocates and tech experts say they could create a chilling effect on free speech and lead to censorship.
Though Trump’s AI framework addresses censorship, it’s limited to preventing AI companies from including ideological or partisan bias in their products. Trump has previously railed against what he calls “woke” AI, a term the president and his allies have used to attack concepts like diversity, equity and inclusion.
Job loss: It’s not just translators and data entry workers who are worried about losing their jobs to AI — legacy tech workers like coders and engineers are, too. There have been widespread concerns about AI disrupting the workforce, with retail giants like Amazon laying off thousands of employees in the name of AI efficiency. The White House says the government should use “nonregulatory” methods to focus on youth development and AI workforce training.
Infrastructure: In line with Trump’s previous AI Action Plan, the framework calls for states and local governments to streamline data center construction and operation. These facilities are increasingly controversial, with nearby residents reporting environmental damage and strain on local electrical grids that drives up electric bills.
Several big tech companies recently agreed to foot the bill for any higher electricity costs, but there’s no way to enforce the voluntary pledge.
Copyright: Whether the use of copyrighted materials in AI training is fair use or copyright infringement is one of the biggest legal issues of the AI age. The plan reiterates the administration’s position that AI companies are covered by fair use — meaning they wouldn’t have to obtain permission or pay for copyrighted content when creating their models.
But given the ever-growing number of lawsuits asking the judiciary the same question, the plan says the federal government should allow those cases to play out. So far, early rulings in cases involving Anthropic and Meta have carved out narrow victories for the tech companies, not authors.
The framework document hints that the federal government could become a future licensing partner for AI companies, stating that it should «provide resources to make federal datasets accessible to industry and academia in AI-ready formats for use in training AI models and systems.»
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Does the White House plan do enough?
Tech industry groups praised the administration’s proposals, while consumer advocacy groups offered skepticism at best.
In a statement backing the plan, the Consumer Technology Association supported a single set of rules for the entire country.
“AI can and will make us better, and we agree that children need special protection, First Amendment rights are paramount, harmful deep fakes should be regulated, and Congress should not act to restrict AI platforms from relying on fair use protection,” the tech industry trade group said.
But according to Samir Jain, vice president of policy at the Center for Democracy and Technology, the government’s playbook is rife with internal contradictions. While it calls for the federal government to preempt state rules and laws on AI development, it also says the federal government shouldn’t undermine state authority.
«The White House’s high-level AI framework contains some sound statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to grapple with key tensions between various approaches to important topics like kids’ online safety,» Jain said in a statement.
Ben Winters, director of AI and data privacy at the Consumer Federation of America, said the proposal prioritizes Big Tech over consumers.
“It’s encouraging to see some stated desires to protect people from AI-generated scams and data abuse of minors, but it’s not enough,” Winters said in a statement. “We need to see money where their mouth is on the protections — more money for consumer protection agencies at both the federal and state levels. So far, they’ve done nothing but cut and hamstring them.”