Study Finds YouTube Algorithm Pushing AI-Generated ‘Junk’ Content to Children


A new investigation by The New York Times has found that YouTube's recommendation algorithm frequently surfaces low-quality, AI-generated videos to children, often disguised as educational content.

The report, published on February 27, analyzed 1,000 YouTube Shorts recommended to young viewers. Researchers discovered a troubling trend: many of the suggested videos were poorly produced, AI-generated clips masquerading as learning material for toddlers.

AI-Generated “Educational” Shorts Raise Concerns

According to the investigation, several channels claim to teach basic concepts such as the alphabet, animals, and early childhood skills. However, the content often includes:

  • Distorted animals and humans with unusual facial features
  • Extra limbs or unnatural body proportions
  • Chaotic, confusing visuals
  • Factually incorrect information

Most of these videos run about 30 seconds, likely a reflection of current limits on AI video generation. Despite containing misinformation and strange imagery, they are presented as toddler-friendly educational material.

Issue Extends to YouTube Kids

The pattern was also identified on YouTube Kids, a separate section designed with enhanced parental controls. Researchers found that similar AI-generated content was still being recommended there, raising further concerns about the platform’s content moderation systems.

Google Responds

Following public backlash, Google, YouTube's parent company, removed several of the identified channels from the YouTube Partner Program, cutting off their ability to monetize content. Individual videos that violated platform policies were also taken down.

However, the findings highlight ongoing challenges in moderating AI-generated content at scale, especially when it targets children.