Open Source Projects Drowning in AI-Generated "Slop" Bug Reports

Software vulnerability submissions generated by AI models are causing major headaches for open-source project maintainers, producing a deluge of low-quality reports that divert time and resources from actual security improvements. The issue, highlighted by The Register, is prompting calls for changes in how bug hunting is conducted within the open-source community.

"We've entered a new era of slop security reports for open source," wrote Seth Larson, security developer-in-residence at the Python Software Foundation, in a blog post. "These reports appear legitimate at first, requiring time to refute, and should be treated as potentially malicious."

Larson's concerns are echoed by Daniel Stenberg, maintainer of the Curl project, who continues to grapple with AI-generated "slop" bug reports nearly a year after raising the issue.

The problem stems from the increasing ease with which AI models can generate large volumes of text, including bug reports. While these reports may appear legitimate, they often lack the necessary depth and accuracy, requiring significant time from already overstretched open-source maintainers to evaluate and dismiss.

"Spammy, low-grade online content existed long before chatbots," wrote Stenberg in response to a recent bug report. "But generative AI models have made it easier to produce the stuff. The result is pollution in journalism, web search, and of course social media."

For open-source projects, this AI-assisted bug report deluge is particularly problematic because it burdens volunteer security engineers, who are already in high demand.

Larson, speaking to The Register, highlighted the potential impact on open-source project sustainability. "Whatever happens to Python or pip is likely to eventually happen to more projects or more frequently," he warned. "I am concerned mostly about maintainers that are handling this in isolation. If they don't know that AI-generated reports are commonplace, they might not be able to recognize what's happening before wasting tons of time on a false report."

He argued that the open-source community needs to address this issue proactively, calling for increased funding and support for security work and emphasizing the need to encourage greater participation from reliable contributors.

"I am hesitant to say that 'more tech' is what will solve the problem," Larson said. "I think open source security needs some fundamental changes. We should be answering the question: 'how do we get more trusted individuals involved in open source?'"

To mitigate the current influx of low-quality reports, Larson urged bug submitters to submit only reports that have been verified by a human and to refrain from using AI for this task, as "these systems today cannot understand code." He also called on platforms that handle vulnerability reports to implement measures to prevent automated or abusive report creation.