For more than a century, corporations have used school partnerships as a means to reach student and family consumers. As far back as the 1920s, schools were so inundated with gifts of branded calendars, charts, maps, posters, thermometers, and samples that the National Education Association (NEA) commissioned a report to help teachers deal with “propaganda in the schools.” In the 1970s, my elementary school in New York State became a site for one of Scholastic’s popular school book fairs to encourage my classmates and me to buy piles of books published by the company. Thirty years later, a local Jamba Juice franchise partnered with my children’s middle school to sell its branded smoothies at their soccer games.
And now, among the corporations eager to reach students are massive tech companies that seek to profit from student engagement under the guise of educational enrichment. Their access to schools enables them to advertise to and collect data from students as they do their schoolwork on school-provided or personal devices. The data students unwittingly provide can then be sent to third parties for use in marketing, product development, and other unknown purposes. In 2022, Internet Safety Labs, an independent product testing and research organization, examined apps used in or recommended by schools, including explicitly educational apps, YouTube, and news apps like CNN, The New York Times, and The Wall Street Journal. The study found that 96 percent of the apps used or recommended by U.S. schools share children’s personal information with third parties. It recommended that schools avoid 78 percent of the apps because they contain advertising and/or share data with companies that monetize it, such as data brokers and data aggregators like Meta and Amazon. The more time students spend on these apps, the more data they generate to be siphoned, used, and sold. When schools adopt an app and require or recommend that their students use it, they entrust their students to content and business practices that they can’t see or control, and that might cause harm to the young people in their care.
Meta, the parent company of Facebook, Instagram, and other online platforms, offers a variety of “collaborations” and “partnerships” that induce schools and school-related organizations to encourage young people’s use of its products, particularly on Instagram. The company donates to and collaborates with the National Parent Teacher Association, for example, to “empower parents to confidently manage” their teens’ use of screens and Instagram. School clubs and teams often have Instagram accounts on which they post photos and events, and communicate with their members.
In April 2025, Meta started recruiting U.S. middle and high schools to participate in Instagram’s new School Partnership Program, inviting schools to partner with Instagram to help combat online bullying. Because many students are on Instagram, Meta suggests that schools should be too, to monitor their students’ posts for bullying. But the partnership actually serves schools much less than it serves Meta.
Participating schools get a special “partner” banner for their Instagram accounts which, in effect, advertises for Instagram by signaling to their communities that they judge the platform to be a safe place for their students. Schools also get to do the work of identifying malicious content: if moderators of a school account spot such content, they can alert Instagram to review the post for removal. Through this strategy, Meta essentially tasks schools with identifying and reporting incidents of bullying and harmful speech—which is significant in light of the company’s decision earlier this year to abandon its own fact-checking process and loosen its content moderation guidelines on its various platforms. For its part, Meta pledges to expedite its review of content that partner-school accounts report for bullying, predation, or other harms.
The idea that Meta will review content reported by partner-school accounts sooner than other reported violations also implies a veiled threat: that schools that do not partner with the company will find themselves waiting longer for review. In offering these agreements, the company leverages an unspoken assumption that any school that cares about its students will accept the unequal partnership it has offered.
Meta, which operated as Facebook, Inc. from 2005 to 2021, has a long history of disregarding the safety of its young users. In their Congressional testimonies, whistleblowers and former employees Frances Haugen (in 2021) and Arturo Bejar (in 2023) documented numerous instances in which the company failed to act on the emotional harm and sexual harassment experienced by teenage girls on Instagram. Similarly, former Facebook policy director Sarah Wynn-Williams documents in her 2025 book, Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, how Facebook capitalized on teenagers’ indications of psychological vulnerability by sharing their data for targeted advertising. When a teenage girl deleted a selfie from her profile, for example, suggesting to the Facebook algorithm that she may be dissatisfied with her appearance, she would immediately receive an ad for beauty products on the platform.
Negligence and manipulations like these are not anomalies. Nor are they old news. In May 2025, Accountable Tech reported that even when Instagram’s so-called sensitive content controls were enabled for teen accounts, the platform’s algorithms recommended content related to racist rhetoric, pornography, and disordered eating. In September 2025, an analysis found that only eight out of forty-seven safety tools listed on Meta’s website actually worked as advertised, and thirty were either ineffective or no longer available for use. Also that month, former employees Jason Sattizahn and Cayce Savage testified to Congress that Meta suppressed research about the safety of Facebook Marketplace, the faith community partnerships the company launched during the pandemic, and virtual reality. With particular regard to children, Savage reported that she was prevented from researching threats to child safety in virtual reality products, and that Meta intentionally avoided gathering the data that would enable it to report the number of underaged children using its virtual reality headset.
In addition to these safety concerns, Meta has allowed Instagram to become inundated with video content generated by artificial intelligence (AI) that is often derogatory and disturbing, including videos depicting LeBron James and Diddy raping Steph Curry in prison, a demon monster eating a woman’s head, and an advertisement for an iPhone case made of human skin. WIRED reported that generative AI has made it easy and lucrative for people to make money by creating horrifying videos designed to go viral. And 404Media reported in March that Instagram is actively encouraging the creation of such videos via a bonus program that pays creators for particularly engaging content. Through this arrangement, Meta makes money selling advertisements on content surfaced by algorithms designed to prioritize whatever will keep users engaged on the site for as long as possible.
AI videos that can horrify and traumatize adults are even more problematic for teenagers, whose developing brains are more susceptible to their devastating effects. Instagram’s algorithms are designed to take advantage of teenagers, who are naturally exploratory, rebellious, and social, and who may lack the foresight or impulse control to stop themselves from clicking on and sharing violent or outrageous AI videos.
Meta’s use of algorithms in core operations conceals the extent to which it profits from business decisions that prioritize engagement over protecting the psychological health of young people. The human decision-makers at Meta function in an environment that emphasizes profit. The human decision-makers in schools do not; they can make informed choices about the products they buy and the partnerships they enter. The argument that adults should leave young people to make their own decisions about social media is disingenuous. Young people increasingly understand the threats associated with social media, but their understanding is often overwhelmed by the compulsion to use it.
Rather than legitimizing Instagram by partnering with and promoting it, schools can opt out of the Instagram School Partnership—and out of using Instagram at all. This, like the growing movement to remove phones from schools, offers schools an opportunity to engage with students about the dangers of digital media—and to support them by modeling that opting out is possible. Yes, many kids will still use Instagram. But at least their school won’t be leading them there.