Families across the country are yet again advocating for stronger protections for teen victims after AI-generated nude images of their children circulated online. Their stories are a disturbing reminder of the myriad ways AI is harming children.
Yet, lawmakers continue to overlook a critical aspect of AI safety: that the data that children produce by merely being online is what powers harmful AI to begin with. Every seemingly benign photo and video uploaded, tagged or commented on by a young user has likely played a role in the development of AI, including the kind that generates deepfake child abuse materials.
As young internet users navigate online spaces, their data becomes fodder for advanced algorithms and feeds an industry that profits from their “digital labor.” As calls for child online safety grow louder, and remain heavily contested, it is important to recognize that the very production of data is a form of labor that puts all children at risk.
Data labor is the notion that every digital interaction represents a form of revenue-generating work. Each day, humans generate about 2.5 quintillion bytes of data, yet this labor remains uncompensated even as it produces record-breaking profits for tech companies. Beyond social media, children perform data labor through texting, using educational apps or engaging with their schools’ learning management platforms.
At Harvard University, my research on the impact of AI on child development has revealed how algorithm design and digital norms shape young people’s beliefs and behaviors. And as a parent, I confront firsthand the challenge of safeguarding my own child from inadvertently fueling big tech’s data-driven monopolies.
The relentless extraction from young data producers also exposes children to harassment, mirroring a hazardous digital worksite. Their data can be used to manipulate their consumer choices, shape their worldviews and groom them for more nefarious purposes. The ubiquitous deployment of facial recognition in public spaces, including schools, exposes them to state and corporate surveillance that can be misused to racially profile and track them in real time.
To be sure, ending the exploitation of children’s data should not overshadow the harrowing conditions of children facing labor rights violations globally. Children mine cobalt in the Democratic Republic of Congo for our smartphone batteries, extract coltan in Central Africa for device capacitors, participate in hazardous gold mining for electronic connectors, and labor in e-waste recycling centers in Asia, all while their condition is too often normalized and disregarded. This neglect is caused by the same economic forces that exploit American children for behavioral data mining.
And while children aren’t necessarily coerced into generating data, their engagement with online platforms implies agreement to terms and conditions too complex for them to comprehend, blurring the line of informed consent. Moreover, many data-driven technologies have become indispensable for children to connect with their communities, families and schools, making data labor an unavoidable reality.
Child online safety must be redefined, and lawmakers must see children not just as online consumers or potential victims of AI harms, but as active data laborers. Online safety policies should include provisions for compensating young data laborers, with one proposal being a universal basic income funded by corporate tech profits. National legislation to curb surveillance and data extraction is essential. Reframing our perception of child labor and enacting these changes holds the promise of building a digital realm that’s both safe and equitable for our youngest digital denizens.
This column was produced for Progressive Perspectives, a project of The Progressive magazine, and distributed by Tribune News Service.