
The Trump administration on Wednesday published its AI Action Plan, a 28-page document outlining proposed policies for everything from data center construction to how government agencies will use AI. As expected, the plan emphasizes deregulation, speed, and global dominance while largely sidestepping the conflicts roiling the AI space, including debates over copyright, environmental protections, and safety testing requirements.
“America must do more than promote AI within its own borders,” the plan says. “The United States must also drive adoption of American AI systems, computing hardware, and standards throughout the world.”
Here are the main takeaways from the plan and how they could impact the future of AI, nationally and internationally.
AI upskilling over worker protections
Companies within and outside the tech industry are increasingly offering AI upskilling courses to mitigate AI’s job impact. In a section titled “Empower American Workers in the Age of AI,” the AI Action Plan continues this trend, proposing several initiatives built on two April 2025 executive orders for AI education.
Specifically, the plan proposes that the Department of Labor (DOL), the Department of Education (ED), the National Science Foundation, and the Department of Commerce set aside funding for retraining programs and study the impact of AI on the job market.
The plan also creates tax incentives for employers to offer skill development and literacy programs. “In applicable situations, this will enable employers to offer tax-free reimbursement for AI-related training and help scale private-sector investment in AI skill development,” the plan clarifies.
Nowhere in the document does the administration propose regulations or protections for workers against being replaced by AI. By going all-in on upskilling without adjusting labor laws to AI’s reality, the Trump administration puts the onus on workers to keep up. It’s unclear how effectively upskilling alone will stave off displacement.
Government AI models may be censored
Multiple figures within the Trump administration, including the president and AI czar David Sacks, have accused popular AI models from Google, Anthropic, and OpenAI of being “woke,” or overly weighted toward liberal values. The AI Action Plan codifies that suspicion by proposing to remove “references to misinformation, Diversity, Equity, and Inclusion (DEI), and climate change” from the NIST AI Risk Management Framework (AI RMF).
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Released in January 2023, the AI RMF is a public-private implementation resource intended to “improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems” — similar to MIT’s Risk Repository. Currently, it does not include references to misinformation or climate change, but does recommend that workforce DEI initiatives be considered by organizations introducing new AI systems.
The AI Action Plan’s proposal to remove these mentions — however broadly defined — would effectively censor models used by the government.
Despite apparent inconsistencies in its stance on protecting free speech, the same section notes that the newly renamed Center for AI Standards and Innovation (CAISI) — formerly the US AI Safety Institute — will “conduct research and, as appropriate, publish evaluations of frontier models from the People’s Republic of China for alignment with Chinese Communist Party talking points and censorship.”
“We must ensure that free speech flourishes in the era of AI and that AI procured by the Federal government objectively reflects truth rather than social engineering agendas,” the plan says.
State legislation threats may return
Earlier this summer, Congress proposed a 10-year moratorium on state AI legislation, which companies, including OpenAI, had publicly advocated for. The ban, tucked into Trump’s “big, beautiful” tax bill, was removed at the last second before the bill passed.
Sections of the AI Action Plan, however, suggest that state AI legislation will remain under the microscope as federal policies roll out, likely in ways that will imperil states’ AI funding.
The plan intends to “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”
The language does not specify what kinds of regulation would be scrutinized, but given the Trump administration’s attitude toward AI safety, bias, responsibility, and other protection efforts, it’s fair to assume that states trying to regulate AI on these fronts would be the primary targets. New York’s recently passed RAISE bill, which proposes safety and transparency requirements for developers, comes to mind.
“The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation,” the plan continues, leaving terms like “burdensome” and “prudent” open to interpretation.
For many, state AI legislation remains crucial. “In the absence of Congressional action, states must be permitted to move forward with rules that protect consumers,” a Consumer Reports spokesperson told ZDNET in a statement.
Fast-tracking infrastructure – at any cost
The plan names several initiatives to accelerate permitting for data center construction, a priority underscored by Project Stargate and a recent data-center-focused energy investment in Pennsylvania.
“We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape,” the plan says. The government intends to “expedite environmental permitting by streamlining or reducing regulations promulgated under the Clean Air Act, the Clean Water Act, the Comprehensive Environmental Response, Compensation, and Liability Act, and other relevant related laws.”
Given the environmental impact that scaling data centers can have, this naturally raises ecological concerns. But some are optimistic that growth will encourage energy efficiency efforts.
“As AI continues to scale, so too will its demands on vital natural resources like energy and water,” Emilio Tenuta, SVP and chief sustainability officer at Ecolab, a sustainability solutions company, told ZDNET. “By designing and deploying AI with efficiency in mind, we can optimize resource use while meeting demand. The companies that lead and win in the AI era will be those that prioritize business performance while optimizing water and energy use.”
Whether that happens remains uncertain, especially given the adverse effects data center pollution is already having today.
Remaining Biden-era protections could still be removed
When Trump reversed Biden’s executive order in January, many of its directives had already been baked into specific agencies and were therefore protected. However, the plan indicates the government will continue combing through existing regulations to remove Biden-era relics.
The plan proposes that the Office of Management and Budget (OMB) investigate “current Federal regulations that hinder AI innovation and adoption and work with relevant Federal agencies to take appropriate action.” It continues that OMB will “identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment.”
The plan also intends to “review all Federal Trade Commission (FTC) investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation,” meaning that Biden-era investigations into AI products could come under revision, potentially freeing companies from responsibility.
“This language could potentially be interpreted to give free rein to AI developers to create harmful products without any regard for the consequences,” the Consumer Reports spokesperson told ZDNET. “While many AI products offer real benefits to consumers, many pose real threats as well — such as deepfake intimate image generators, therapy chatbots, and voice cloning services.”