YouTube is preparing to update its policies to restrict the ability to generate revenue from “inauthentic” content, including mass-produced and other repetitive videos. The company will update its YouTube Partner Program (YPP) monetization policies on July 15 and provide more detailed guidelines about the types of videos that can and cannot be monetized. While further details are yet to be released, a page in YouTube’s Help documentation explains that creators have always been required to upload “original” and “authentic” content.
While some YouTube content creators have expressed concern that the update could limit their ability to monetize certain types of videos, such as reaction videos or those featuring clips, a post from YouTube Head of Editorial & Creator Liaison Rene Ritchie confirms that this will not be the case. In a video update, Ritchie described the change as a “minor update” to YouTube’s long-standing YPP policies, designed to better identify when content is mass-produced or repetitive.
This comes amid a rise in “AI slop” videos, a term referring to low-quality videos generated by artificial intelligence. For instance, it is common to find an AI voice overlaid on photos, video clips, or other repurposed content, often created using text-to-video AI tools. Channels filled with AI-generated music often have millions of subscribers, and AI-generated fake videos about news events, like the Diddy trial, have gained millions of views.
One notable case was an AI-generated true crime documentary that went viral earlier this year, according to a 404 Media report. The documentary, luridly titled “Husband’s Secret Gay Love Affair with Step Son Ends in Grisly Murder,” racked up two million views before it was revealed to be entirely made up. In another case, YouTube CEO Neal Mohan’s likeness was used in an AI-generated phishing scam on the site, even though the platform has tools in place that allow users to report deepfake videos.
YouTube also clarified that the use of AI would not entirely prevent videos from being monetized. Videos that use AI to improve content will still be eligible, provided they meet all other requirements.
This development has generated mixed responses. While many have praised the measures for cutting down on “slop” content that can damage YouTube’s reputation and value, others have criticized the platform for allowing such content to grow in the first place. Some have pointed out that YouTube CEO Neal Mohan championed a new tool for generating Shorts “from scratch,” which could itself give rise to such slop content. Critics also noted the irony that AI models, including Google’s Veo 3, were trained on YouTubers’ content without their express permission. It remains unclear what exactly constitutes “repetitive content” under the updated guidelines.

