Labor Department Launches ‘Make America AI-Ready’ Course Amid Ethics Questions Over Corporate Partnerships
Why It Matters
The federal government’s push to expand AI literacy among American workers is raising questions about corporate influence in public programs, government ethics rules, and whether such training adequately addresses real workforce disruptions driven by artificial intelligence. The course, delivered nationally via text message, has drawn attention from educators, labor advocates, and ethics watchdogs alike.
What Happened
Late last month, the Department of Labor launched a new AI training program titled “Make America AI-Ready,” describing its goal as making artificial intelligence “feel less like a mystery and more like a tool you actually want to use.” The course is part of the Trump administration’s broader AI Action Plan and reflects the White House’s ongoing effort to position the United States as a global leader in artificial intelligence development.
The program consists of seven short daily modules delivered via text message, each taking less than 10 minutes to complete. Each module includes a lesson followed by quiz questions designed to build practical AI skills. The Department of Labor says the course is one of its contributions to implementing the administration’s national AI strategy.
The Trump administration has taken an aggressive pro-AI posture since taking office, placing Silicon Valley executives in advisory roles, working to preempt state-level AI regulations, and pushing for hundreds of billions of dollars in AI-related infrastructure investment across the country.
By the Numbers
- 7 daily course modules, each under 10 minutes in length
- 12+ AI tools listed in the lesson titled “Put AI to Work For You,” including products from OpenAI, Anthropic, Google DeepMind, and xAI
- $0 paid to technology partner Arist, which delivered the course for free under the White House’s Pledge to America’s Youth initiative — without a formal contracting process
- Hundreds of students per year enrolled in a comparable AI literacy course at the University of Texas at Austin, one indicator of demand for this kind of training
What Educators Are Saying
AI literacy experts offered a mixed but generally positive assessment of the course content. Peter Stone, chair of the Department of Computer Science at the University of Texas at Austin and co-creator of a similar AI course, said public literacy programs are increasingly necessary as AI becomes embedded in daily life. “I think it’s important for people to be able to cut through to what’s true and also be able to be literate with artificial intelligence, because they’re going to need it,” Stone said.
Mike Caulfield, a digital literacy expert at the University of Washington Bothell who reviewed the course materials, called it “a nice little course in general,” praising its coverage of context, specificity, and the need to verify AI outputs. However, Caulfield noted the course struck an overly optimistic tone in places, frequently emphasizing AI’s time-saving potential without acknowledging that early research suggests most workers have not experienced those benefits. In fields like software development, AI has in some cases led to “work intensification,” where employees tackle more complex tasks as AI handles simpler ones.
One specific concern flagged by reviewers: the course links to an external video suggesting users could ask an AI chatbot whether a foraged mushroom is safe to eat — advice that experts warn could lead to poisoning. The Department of Labor’s chief innovation officer declined to address that particular recommendation, and the department did not respond to follow-up requests for comment.
Ethics Questions Over Corporate Involvement
The course’s partnership with technology company Arist — which delivered the program for free — is drawing scrutiny. Craig Holman, an ethics and lobbying expert at the nonprofit Public Citizen, called the arrangement highly unusual. “A company running a government program and not getting paid by the government to do it … sounds exceedingly suspicious to me,” Holman said.
Holman also raised concerns about the course listing more than a dozen private AI products by name, arguing that doing so constitutes using public resources to promote private commercial interests — a potential violation of federal ethics laws. He noted the current administration has not been enforcing those statutes. The Department of Labor maintained that listing tools does not constitute an endorsement, with its chief innovation officer saying the agency identified “a diverse number of different tools and companies that are out there” for Americans to consider.
Adding to the confusion, one tool listed in the course — the data visualization platform Datawrapper — does not use AI in any capacity, according to the company itself.
Zoom Out
The federal government’s embrace of AI workforce training reflects a national conversation about how automation will reshape the American labor market. Several states have launched their own AI readiness initiatives, and Congress has debated the scope of federal involvement in AI governance. The Trump administration has consistently favored industry-led solutions over regulatory frameworks, a posture that has drawn both praise from business groups and criticism from labor advocates who say workers need stronger protections, not just digital skills training. The broader debate over how federal resources are deployed and withheld continues to shape trust in government-sponsored programs.
What’s Next
The “Make America AI-Ready” course remains publicly available via the Department of Labor. Ethics watchdog groups have signaled continued scrutiny of the corporate partnership arrangement, and it is unclear whether the Justice Department will examine potential violations of federal ethics statutes. Labor advocates are expected to press for additional course content addressing workforce displacement, job loss, and worker protections as AI adoption accelerates across American industries.