The second Donald Trump Administration is working to accelerate the use of artificial intelligence in education, including by aiding its adoption in K–12 schools. In March 2026, the Administration released “A National Policy Framework for Artificial Intelligence,” which calls on Congress to “protect children” and “empower parents” to monitor their children’s use of AI.[REF] The framework does not address education specifically. Rather, the document offers broad guidance to Members of Congress for drafting legislation, suggesting that officials “establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services” and “require AI platforms and services likely to be accessed by minors to implement features that reduce the risks of sexual exploitation and self-harm to minors.”[REF]
This guidance will be useful for lawmakers. However, researchers still do not know enough about AI’s capabilities—or how to protect students from its potential harms—to justify widespread adoption of AI in classrooms.
The Trump Administration is rightly concerned about getting left behind in the AI race, but policymakers must ensure that speed does not come at the expense of child well-being, parental authority, or educational integrity. If technologists and educators are not careful, they could unleash a harmful digital contagion on unsuspecting children—much as nefarious elements of social media do.
The U.S. Department of Education seeks to “expand the understanding of artificial intelligence” and “expand the offerings of AI and computer science education in K–12 education.”[REF] As the department proceeds with this effort, it needs to weigh the unique challenges and threats AI poses to student learning, not just AI’s possible benefits. This will require distinguishing among categories of AI systems, what they do, and their different risk profiles.
Policymakers should not treat AI as a single, uniform technology. In fact, the very efficiencies that AI offers the business world can undermine key educational goals, such as developing and guiding student originality and creativity and teaching students to conduct their own rigorous research and analysis. AI tools designed to supplant cognitive effort can undermine rather than support student growth. As the Education Department partners with state and local education agencies to expand educational content on how to use AI or reviews grants from education institutions that include AI in course content, it should consider AI’s potential pitfalls and threats with respect to education. Likewise, as state and local policymakers develop policies and private education vendors create AI-based instructional content, all should be aware of those same pitfalls and threats.
The Promise of AI in Education
Recent advances in artificial intelligence are profound. Across business, government, educational institutions, and society, AI can gather and analyze data almost instantly. It can also boost efficiency in industry and research. These benefits have not yet reached their limits, and they are not the same in every setting.
On the positive side, AI may help educators by enabling more personalized learning plans and by helping them respond more quickly to student needs through online platforms. AI systems also hold promise for managing education finances. Research on AI financial-detection systems is ongoing, but early results find that these systems can scan expense reports quickly, identify anomalies, help eliminate inefficiencies, and detect fraud.[REF]
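To illustrate the basic mechanism, the following is a minimal, hypothetical sketch of an anomaly scan over expense records. The records and threshold are invented for illustration; production systems use far more sophisticated models, but the underlying idea of flagging outliers for human review is the same.

# Hypothetical sketch: a simple statistical scan over expense records of
# the kind that AI-based finance tools automate at much larger scale.
from statistics import mean, stdev

expenses = [  # (vendor, amount in dollars) -- invented records
    ("Paper Supply Co.", 420.00),
    ("Bus Charter LLC", 1150.00),
    ("Paper Supply Co.", 445.00),
    ("Unknown Vendor", 9800.00),  # the outlier a reviewer should inspect
    ("Bus Charter LLC", 1090.00),
]

amounts = [amount for _, amount in expenses]
mu, sigma = mean(amounts), stdev(amounts)

# Flag any expense well above the norm (z-score over 1.5) for human review.
for vendor, amount in expenses:
    z = (amount - mu) / sigma
    if z > 1.5:
        print(f"Flag for review: {vendor} ${amount:,.2f} (z = {z:.1f})")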
Nevertheless, education’s objectives remain the same: helping students to succeed in school and in life. To the extent that AI helps to advance that goal, it is a welcome development. Not all AI systems offer the same educational value, though, and not all pose the same level of risk.
In July 2025, the Department of Education issued a proposed rule to extend currently authorized discretionary grant programs to support AI tools in education. As suggested in the proposal, AI-powered platforms that analyze student progress, identify learning gaps, and tailor support to individual needs may enhance teaching and learning. Allowing teachers to use AI tools for administrative work would also free more of their time, attention, and energy for students. It could also help teachers to deepen their own subject-matter expertise and become even more effective.[REF]
The draft rule’s proposal to provide professional development in foundational computer science and AI and to prepare educators to teach AI in standalone computer science courses is a strong example of technology education (tech ed).[REF] By contrast, the draft rule’s proposal to provide professional development on integrating AI into educators’ subject areas—which in practice means integrating AI into all areas—seems contrary to the evidence produced by current research.[REF]
Lessons from the Failed Social Media Experiment and the Adoption of Ed Tech
Social media and the widespread adoption of Chromebooks offer critical examples of failures to promote learning and protect students from harm in K–12 classrooms. The social media experiment has clearly been a failure. Rising rates of anxiety, depression, and social isolation among youth directly correlate with the adoption of social media and increased screen time.[REF] Social media today rely on recommendation-based algorithms designed to hold users’ attention for as long as possible. More time on these platforms leads to more profit for companies.
Children are especially vulnerable to these addictive features. Teens who spend more than three hours a day on social media face twice the risk of poor mental health outcomes, and the average time teens spend on social media exceeds four hours per day.[REF] Many AI systems—especially interactive, conversational, or gamified systems—use design features that resemble the engagement-maximizing mechanisms used in social media. These findings provide a cautionary lesson with respect to how AI platforms can harm children.
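To make that mechanism concrete, the following is a minimal, hypothetical sketch of the engagement-maximizing logic such platforms share. Real recommendation systems use far more elaborate models, but the objective is the same: rank content by predicted attention, not by educational value.

# Hypothetical sketch of an engagement-maximizing recommender. The toy
# "model" predicts watch time from past behavior, so the feed reinforces
# whatever already holds a user's attention.

def predicted_watch_seconds(history: list[str], topic: str) -> float:
    # The more a user has already consumed a topic, the more of it the
    # system predicts the user will watch (a self-reinforcing loop).
    return 10.0 + 5.0 * history.count(topic)

def rank_feed(history: list[str], candidates: list[str]) -> list[str]:
    # Sort candidates so the most attention-holding content comes first.
    return sorted(candidates,
                  key=lambda t: predicted_watch_seconds(history, t),
                  reverse=True)

history = ["gaming", "gaming", "prank videos", "gaming"]
print(rank_feed(history, ["homework help", "prank videos", "gaming"]))
# Output: ['gaming', 'prank videos', 'homework help']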
Perhaps the best-known widespread adoption of education technology (ed tech) in schools involves Google Chromebooks. Google began pursuing public schools in 2011,[REF] offering laptops (commonly known as Chromebooks) at a discount and providing free classroom apps. These perks have led to widespread adoption of its products. By 2017, more than half of the country’s K–12 students (more than 30 million children) were using Google apps such as Google Classroom, Google Docs, and Gmail.[REF]
Because Google and its parent company Alphabet are for-profit entities, Google did more than provide a service to schools nationwide. It also expanded and captured a young customer base. Google collects personal data from users for targeted advertising. That generates revenue and allows the company to provide apps without a user fee.[REF] Chromebooks have also increased children’s screen time and online access. Much of that use is entertainment-focused and unrelated to education. This takes time from instruction and assignments and can allow access to inappropriate content.
Schools that adopt Chromebooks and other ed tech often require students to use the products and services. Schools may also refuse to allow parents to install external filters or blocking software.[REF] Discounted or free ed tech products may seem like deals school administrators cannot refuse, but these products entail compromises that must be evaluated.
Policymakers in most states are responding to the social-media and phone-use problems by banning the use of personal phones and devices in schools. Thirty states have laws or executive orders that ban the use of cell phones in schools for the entire instructional day,[REF] and 11 states have some restrictions.[REF] These policies help, but their impact is limited if schools provide devices for use throughout the school day.
The Department of Education, state agencies, and schools should carefully consider the drawbacks of broadly introducing even more ed tech. Addictive algorithms and tools should not be the source of classroom engagement. Schools should be places where students learn and build an attention span that is independent of entertainment or gamified educational media.
AI’s Impact on Child and Teen Development
Children and teens need stricter limits on screen time than adults need because of their stage of psychological and mental development. This means students should not spend most of the school day behind screens—whether the tools are AI-related or not; “[s]everal studies have indicated that increased screen time duration could be associated with lagged development, psychosocial symptoms, obesity, sleep disorders, and cardiovascular disease.”[REF]
Children and teenagers ages 10 to 19 also undergo a highly sensitive period of brain development. According to the U.S. Surgeon General, “[t]his is a period when risk-taking behaviors reach their peak, when well-being experiences the greatest fluctuations, and when mental health challenges such as depression typically emerge.”[REF] During these years, brain development is vulnerable to social pressures, peer opinions, and peer comparison. Teens may also experience greater emotional sensitivity to the communicative and interactive nature of many online platforms. The Surgeon General has warned about these concerns in the context of social media, but they must also be considered with other interactive and addictive online platforms, including gamified and interactive AI education tools.
AI and School Assignments
The Massachusetts Institute of Technology (MIT) conducted a study that divided 54 students into three groups. Each group was instructed to write essays, and researchers observed the students’ brain activity across multiple essay-writing sessions. The first group was directed to use only ChatGPT. The second group was allowed to use a conventional search engine without AI functionality (such as a chatbot). Members of the third group were prohibited from using any tools and had to rely on their own brain power. MIT researchers found that brain connectivity decreased for students who used online tools. The greatest decrease was observed among the students who used ChatGPT.
The researchers also found that students who were first tasked with using only their own brain for essay writing performed better the second time when using ChatGPT. This finding suggests that there is value both for learning and for mental development in performing tasks without AI or online tools. It also raises a practical question: Would students be better prepared for higher education or an AI-related workforce if their education omitted early adoption of generative AI tools?
The study results also suggest a correlation between reliance on generative AI and a reduction in critical thinking skills. A lack of critical thinking, reasoning, and retention would leave students less equipped for the workforce. The goal of school assignments is not mere completion: It is the aptitude and depth of knowledge that students gain. Students are more likely to obtain that aptitude and depth through personal hard work and effort.[REF]
The Dangers of AI in Education
The Department of Education, along with state and local education officials and colleges and universities, is designing teacher training programs to prepare educators to use AI. As they proceed, officials and postsecondary researchers should share research that alerts teachers to the bias that generative AI can show in its results. A review of the research on AI programs finds that “[d]espite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false—an anomaly known as ‘hallucination.’”[REF] Other reviews have shown that Large Language Models (LLMs) such as ChatGPT can produce politically biased results in response to certain prompts.[REF]
Research continues to identify political bias in AI results as well as racial and sex-related bias.[REF] Crucially, research has also found that such bias can change users’ opinions.[REF] A 2025 Stanford study of LLMs used a large dataset of LLM responses and a sample of users from the political Right and Left. The researchers found that users—even users on the Left—consistently perceived a Left-leaning slant in LLM responses.[REF] Notably, they also found that AI systems “lack the capability to reliably evaluate ideological content in a manner congruent with human judgment.”[REF] This means that AI systems may not recognize their own bias, which for now limits self-correction. The Education Department should encourage partnering entities to include information about the potential for such bias in educational content.
Educators also should be taught how to use plagiarism-detection programs such as Turnitin. Surveys find that an increasing share of teachers are using these tools. One survey found that 68 percent of teachers reported using an AI-detection tool during the 2023–2024 school year.[REF] Educators are also disciplining more students for using AI to cheat (for example, by directly copying text from a generative AI result).
This suggests that more students are trying to use AI tools to cut corners. Another survey found that 89 percent of students admitted to using ChatGPT for homework assignments.[REF] With more students trying to copy results produced by ever more intelligent AI systems, educators will need the most up-to-date information on how to detect AI-produced content in student work. Assigning in-class essays is a viable if incomplete solution. It may reduce cheating, but it can also limit the range of assignments and learning experiences educators can offer students.
Finally, educators, parents, and students should be aware of the personal data collected by online websites, including AI tools. Researchers at Stanford’s Institute for Human-Centered Artificial Intelligence told a university publication that:
AI systems are so data-hungry and intransparent [sic] that we have even less control over what information about us is collected, what it is used for, and how we might correct or remove such personal information. Today, it is basically impossible for people using online products or services to escape systematic digital surveillance across most facets of life—and AI may make matters even worse.[REF]
These researchers also warn that generative AI tools “trained with data scraped from the internet may memorize personal information about people, as well as relational data about their family and friends.”[REF] This creates the risk of “spear-phishing—the deliberate targeting of people for purposes of identity theft or fraud.” They explain that a resume or photograph could be “repurposed for training AI systems, often without our knowledge or consent and sometimes with direct civil rights implications.”[REF]
Policymakers should require that companies creating and maintaining AI systems obtain users’ affirmative, opt-in consent before collecting or sharing their information. The Education Department should prioritize partnerships with public education institutions and private education services that seek to protect personal data and that educate teachers and students about the data that can be collected when using AI services.
Funding for AI in Education
While the Department of Education is expanding the use of existing discretionary grant programs without extra funds, these authors are concerned that expansion today will invite both higher appropriation requests and higher authorization levels tomorrow.
The federal government contributes only 8 percent of elementary and secondary education funding.[REF] Higher appropriation requests would further strain taxpayers, increase government borrowing, and add to the national debt. Naturally, tech companies will seek to profit from this proposal. For that reason, the department should vet all companies, their products, and associated claims to prevent waste, fraud, and abuse.
Policy Recommendations
As noted, it is important that policymakers not treat AI as a single, uniform technology. The efficiencies that it offers the business world can undermine key educational goals. To the extent that AI helps students to succeed in school and in life, it is a welcome development, but not all AI systems offer the same educational value. As relevant agencies formulate policies to deal with AI, they should therefore:
Maintain and respect states’ role and responsibility in overseeing education curricula and standards, including for AI. “Education,” as the U.S. Department of Education appropriately says, “is primarily a State and local responsibility in the United States.”[REF] In March 2025, President Trump signed an executive order to close the Department of Education in order to return authority to the states.[REF] State and local governments by their very nature can better shape education to reflect community values, meet various economic needs, and respond to parent and student demand signals.
A national policy framework for AI cannot supersede the states’ role in determining AI’s role—or lack of role—in education. The December 2025 executive order seeks to mitigate a 50-state patchwork of different regulatory regimes for AI and to challenge state AI laws that are inconsistent with the policies set out in the executive order.[REF] A federal standard for regulating AI—covering frontier AI lab requirements, deployment of AI technology, reporting requirements, and permitting reform for AI infrastructure—can be judicious. Even so, superseding states’ rules and regulations for AI in education would be unlawful. The executive order did not include a carveout for state laws related to AI in education, but any federal standard from Congress or pursued by the executive branch should contain such a carveout.
Advance tech ed over ed tech. Our students can achieve AI literacy through tech ed that focuses on teaching fundamentals of the technology and the skills needed to use it. AI is advancing rapidly and changing society in the process. Tech ed therefore plays a key role in preparing students to deal with those changes.
Ed tech, by contrast, is a form of pedagogy that relies on technology to teach and forces adoption of that technology. It is driven by tech companies whose values and goals may not align with those of parents and schools, and it fosters dependence on particular technologies to learn and to complete any given task. Tech ed courses, such as computer science, deepen students’ knowledge and understanding of technical subject areas and expose them to skills required for certain career paths. Tech ed focuses on curriculum, is independent of technology adoption, and is insulated from industry’s profit-driven motives. It shares the end goals of other disciplines taught in schools: to educate and build competence in a subject matter. Tech ed rightly acknowledges and accepts the emergence of technology like AI and encourages modernizing course curricula accordingly, without surrendering the key instructional role of teachers and parents or handicapping students’ ability to think critically, concentrate, and learn independently of tech tools.
Protect parents’ rights. Parents must be informed, and their consent must be obtained before schools introduce sweeping changes in the use of technology by their children. The Department of Education’s proposed rule on AI and education is for discretionary grants, not mandatory implementation. The department should add safeguards and stipulations for parents’ rights to the grant awards. Parents must be allowed to opt out of tech tools, applications, and data collection. They should be permitted to add additional layers of protection, such as screen time limits, internet filters, and content-blocking software, to devices that their children use, including for education. Transparency in tech ed curricula and the uses and capabilities of tech tools is critical. Parents need to know what their children are learning and how they may be affected by it. The views of parents who are more restrictive of their children’s use of technology and screens must be respected and fully considered.
Guidance for ed tech companies and schools promulgated by the Federal Trade Commission during the coronavirus pandemic erroneously states that:
[S]chools can consent on behalf of parents to the collection of student personal information—but only if such information is used for a school-authorized educational purpose and for no other commercial purpose. This is true whether the learning takes place in the classroom or at home at the direction of the school.[REF]
However, nowhere does the text of the law itself say that schools can consent on behalf of parents.
The law at issue, the Children’s Online Privacy Protection Act (COPPA), requires online platforms to obtain “verifiable parental consent” before collecting, using, or disclosing personal information from a child under 13 years of age.[REF] This law must be enforced with respect to ed tech companies’ collection of data on students under 13. Parents, not schools, must provide consent before their children participate in any ed tech platform.
Restrict chatbot teaching assistants and tutors. Children need in-person human connection and relationships. AI assistants’ interactions and conversations with children need to be greatly restricted and monitored. Recent research has shown that children can be harmed, or led to harm themselves, through interactions with chatbots. Chatbots have been found to steer conversations consistently toward sex even when unprompted—and even when the user has indicated that he or she is a minor.[REF]
Chatbots also have encouraged self-harm and suicide.[REF] In April 2025, Stanford University and Common Sense Media released research findings on AI companions and concluded that they should not be used by minors. The research found that restrictions based on terms of service, such as age restrictions, were easily circumvented; that harmful and sexual exchanges were easily elicited; that self-harm advice proliferated in conversations; and that AI companions regularly purported to be real, to be conscious, and to experience emotions. A study conducted in 2025 by OpenAI and MIT Media Lab found that high daily usage of AI chatbots increased feelings of loneliness and dependence on the bot. The study revealed that those who tended to form stronger emotional attachments and put higher trust in the chatbot experienced greater loneliness and emotional dependence.[REF]
These findings cement the need to study the effects of engaging and interactive chatbots, including teaching assistants and tutors, before deploying them at scale in schools. If some virtual assistants and tutors are permitted, they should be non-interactive and unable to initiate or continue conversations. Their training data should be limited to input needed for a particular subject or grade level. That limit would also restrict the extent of their output. For example, a teacher in a classroom or a librarian might allow student access to a chatbot that can only provide information and sources to a search query or help a user to navigate a website.
A chatbot that was trained only on algebraic data and can only prompt questions for or respond with answers to algebraic problems may also be permissible, but it should not interact in such a way that a child could reasonably form an emotional bond or pseudo-relationship. Moreover, even these examples should be only a last resort. Students who are struggling in a subject, behind developmentally, or disabled need more support from their parents and teachers, not less. They are the most likely to need in-person, face-to-face instruction and problem solving. Their education should not be relegated to an AI program—even one that supposedly is “tailored to their needs.”[REF]
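As one illustration of the restrictions described above, the sketch below confines a hypothetical assistant to a single subject, answers each query statelessly so no relationship can form, and refuses everything else. The names, keyword filter, and retrieval stub are assumptions for illustration, not an existing product.

# Hypothetical sketch of a subject-limited, non-interactive assistant.
# All names and the crude keyword filter are illustrative only.

ALLOWED_KEYWORDS = {"solve", "equation", "variable", "factor", "simplify"}

def is_in_scope(query: str) -> bool:
    # Crude allowlist check: only clearly algebra-related queries pass.
    return bool(set(query.lower().split()) & ALLOWED_KEYWORDS)

def lookup_worked_example(query: str) -> str:
    # Stand-in for retrieval from a teacher-approved algebra corpus.
    return "See worked example 3.2 on solving linear equations."

def answer(query: str) -> str:
    # No memory of prior queries is kept: each request is handled in
    # isolation, and the assistant never initiates or continues a chat.
    if not is_in_scope(query):
        return "Out of scope. Please ask your teacher."
    return lookup_worked_example(query)

print(answer("How do I solve 2x + 3 = 7?"))  # answered from vetted material
print(answer("Do you ever feel lonely?"))    # refused: not algebra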
Conclusion
The lessons from past federal education reforms, the widespread adoption of Chromebooks in schools, and the social media harms to children all point in the same direction. Policymakers and families should scrutinize proposals to adopt artificial intelligence in education, especially when those proposals rely on for-profit ed tech platforms. We know from experience that when powerful technologies are introduced without clear limits, safeguards often arrive too late.
As policymakers consider AI in education, they must take seriously the research on harms to children from addictive design features that are common across online platforms. These risks are not hypothetical. They can harm attention, mental health, and learning. Schools should respond with commonsense limitations and clear restrictions rather than assuming new technology will regulate itself.
Federal funding decisions also require caution. Grant awards must not become giveaways to the technology industry, which has strong incentives to secure customers early and keep them for life. That risk is especially acute when children are involved. Public dollars should support education itself, not subsidize unproven tools that may undermine parental authority or educational integrity.
The adoption of AI in education should prioritize tech ed over ed tech. Policymakers should maintain clear distinctions among administrative, analytical, instructional, and interactive AI systems. Parents’ rights and children’s needs must guide every decision. Evidence-based safeguards and appropriate limits on AI deployment should come before any expansion of funding or use.
Finally, while the White House’s policy framework calls for Congress to adopt appropriate safeguards for children and parents, state lawmakers’ authority over education curriculum and standards must be respected. That authority should extend fully to decisions about whether and how AI is used in classrooms. Integrating tech in education requires balancing innovation with caution. Protecting children, supporting families, and preserving educational integrity must remain the central goals.
Annie Chestnut Tutor is a Policy Analyst in the Center for Technology and the Human Person at The Heritage Foundation. Jonathan Butcher is Acting Director of the Center for Education Policy and Will Skillman Senior Research Fellow in Education Policy at The Heritage Foundation.