🚨 Combating AI Slop: A Framework for Effective AI Use


TL;DR: AI-generated content is flooding our workflows with diminishing returns. This page defines AI Slop, identifies where it happens, and provides actionable tools to filter signal from noise.

1. AI Slop is flooding orgs — 95% of organizations see no ROI on AI, content volume is up 40% YoY, and 53% of recipients are annoyed by low-quality AI output that shifts work downstream

2. Two main sources — Confluence pages published without review, and PRs containing AI-generated code submitted without self-review

3. Fix it with machine-first review — use an AI agent to review content before publishing or requesting human reviewers


🎯 The Decision

Adopt a "machine review before human review" workflow — AI catches the obvious slop so humans can focus on substance.


✅ The Action

Confluence: Run pages through AI for a summary before publishing. If it can't be summarized clearly → revise first.
PRs: Require AI agent review before requesting human reviewers.



Longer version


Slop Metrics of this article

Word Count: 807 words
Reading Time: 5 minutes

Slop Score of this article: 87/100 ✅ PASS

| Metric               | Score  | Weight | Weighted |
|----------------------|--------|--------|----------|
| Verbosity Ratio      | 82/100 | 30%    | 24.6     |
| Action Item Presence | 95/100 | 35%    | 33.3     |
| Decision Clarity     | 90/100 | 35%    | 31.5     |
| TOTAL                |        |        | 89.4     |
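The weighted total in the table can be reproduced with a short script. This is a sketch of the proposed scoring, assuming the three sub-scores are produced by some upstream tool; the metric names and weights come from the table above.

```python
# Sketch of the proposed Slop Score: a weighted average of three
# sub-scores (each 0-100). Weights mirror the table above; how each
# sub-score is measured is left to the (hypothetical) scoring tool.

WEIGHTS = {
    "verbosity_ratio": 0.30,
    "action_item_presence": 0.35,
    "decision_clarity": 0.35,
}

def slop_score(scores: dict) -> float:
    """Combine per-metric scores (0-100) into a weighted total."""
    return sum(scores[name] * weight for name, weight in WEIGHTS.items())

# This article's own sub-scores, per the table:
total = slop_score({
    "verbosity_ratio": 82,
    "action_item_presence": 95,
    "decision_clarity": 90,
})
print(total)
```

This reproduces the table's total of 89.4 (up to rounding).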

Scorecard Summary

───────────────────────────────────────────────
SLOP SCORE: 87/100
████████████████████████████████████████░░░░░░  87%
✅ PASS - Ready to publish

Thresholds:
  • 80-100: ✅ Publish
  • 60-79:  ⚠️ Revise recommended
  • 0-59:   🚫 Requires revision before publish
───────────────────────────────────────────────
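The thresholds above map directly to a verdict function. A minimal sketch, assuming the score is already computed:

```python
# Map a 0-100 Slop Score to the publish verdict, using the
# thresholds from the scorecard above.

def verdict(score: float) -> str:
    if score >= 80:
        return "✅ Publish"
    if score >= 60:
        return "⚠️ Revise recommended"
    return "🚫 Requires revision before publish"

print(verdict(87))  # ✅ Publish
```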


📉 The Problem

  • 95% of organizations are not seeing ROI on AI investments
  • 40% YoY increase in volume of code and Confluence pages
  • 53% of recipients report being annoyed by AI-generated work
  • 38% report being confused
  • 22% report feeling offended

What is AI Slop?

AI Slop (noun): Low-quality, AI-generated content that appears productive but adds cognitive burden rather than value. It shifts work from the creator to the recipient—who must now spend time and mental energy deciphering, validating, or diplomatically addressing subpar output.

"The cost of slop isn't just bad content—it's the hidden tax on everyone who has to deal with it."

🎯 The Real Cost

  • Recipients: Time spent figuring out what the author actually meant
  • Reviewers: Mental energy spent diplomatically addressing low-quality PRs
  • Teams: Meeting time re-explaining poorly documented decisions
  • Organizations: Eroded trust in AI tools, slower adoption of effective patterns

The paradox: AI was supposed to save time. Instead, poorly used AI shifts the time burden downstream—often to more senior or specialized people whose time is more valuable.


🔍 Where AI Slop Happens (Scope)

| Surface          | In Scope? | Why                                                        |
|------------------|-----------|------------------------------------------------------------|
| Confluence Pages | ✅ Yes    | High volume, often unreviewed before publishing            |
| Pull Requests    | ✅ Yes    | AI-generated code submitted without self-review            |
| Email/Slack      | ❌ No     | Microsoft Copilot / Slack AI already provide summarization |
| Blogs            | ❌ No     | Built-in summarize buttons available                       |

🎭 Slop Scenarios

Scenario 1: The Wall of Text Confluence Page

What happens: Author uses AI to generate a 10-page design doc. Readers can't find the decision, the tradeoffs, or the action items.

Symptoms:

  • No clear structure or TL;DR
  • Repeats the same point in different words
  • Sounds authoritative but says nothing concrete

Impact: Readers skim, miss critical info, or skip entirely. Decisions get made in Slack instead.


Scenario 2: The Drive-By PR

What happens: Author uses AI to generate code, runs it once, and opens a PR without reviewing their own changes.

Symptoms:

  • Obvious bugs that a single read-through would catch
  • Inconsistent style with the rest of the codebase
  • Comments that describe what the code does (redundant) rather than why

Impact: Human reviewers become unpaid QA. Review fatigue leads to rubber-stamping.


Scenario 3: The "Looks Complete" Artifact

What happens: A doc or PR appears thorough—proper headings, good formatting—but lacks substance.

Symptoms:

  • Generic statements that apply to any project
  • No specific decisions, numbers, or tradeoffs
  • Reads like a template that was never filled in

Impact: False confidence. Teams proceed thinking alignment exists when it doesn't.


🛠️ Effective Tools & Mitigations

For Confluence Pages

AI Summary Gate: Before publishing pages >1 screen, run through AI: "Summarize this in 3 bullets. What's the decision? What's the action?" If you can't answer, the page isn't ready.
Slop Score (Proposed): Automated scoring on publish: verbosity ratio, action item presence, decision clarity. Flag pages that score poorly for author revision.
Reading Time Indicator: Display estimated reading time. If >5 min, require a TL;DR section.
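The reading-time gate is simple enough to sketch. This assumes a reading speed of about 200 words per minute (a common convention, not a Confluence setting) and the 5-minute threshold named above:

```python
# Sketch of the "Reading Time Indicator" gate: estimate reading time
# from word count and require a TL;DR for anything over 5 minutes.
# 200 wpm is an assumed average reading speed.

WORDS_PER_MINUTE = 200
TLDR_THRESHOLD_MIN = 5

def reading_time_minutes(text: str) -> float:
    return len(text.split()) / WORDS_PER_MINUTE

def needs_tldr(text: str) -> bool:
    return reading_time_minutes(text) > TLDR_THRESHOLD_MIN

page = "word " * 1200  # stand-in for a ~1200-word page
print(needs_tldr(page))  # True: 6 minutes of reading, so require a TL;DR
```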

For Pull Requests

Machine Review First: Require AI review before requesting human reviewers. AI catches the obvious issues; humans focus on architecture and logic.
Self-Review Checklist: PR template requiring the author to confirm: "I have read my own diff," "I have run the tests locally," "I can explain why each change exists."
Slop Detection Bot: Flag PRs with: no test changes, >500 lines with no description, or AI-generated commit messages ("Update file," "Fix bug").
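The bot's three heuristics can be sketched in a few lines. The PR dict below is illustrative; a real implementation would pull these fields from the code host's API, and the generic-message list is an assumption:

```python
# Sketch of the proposed Slop Detection Bot heuristics. The `pr` dict
# shape is hypothetical, standing in for data from a code-host API.

GENERIC_MESSAGES = {"update file", "fix bug", "update code", "changes"}

def slop_flags(pr: dict) -> list:
    """Return the list of slop heuristics a PR trips."""
    flags = []
    if not pr.get("test_files_changed"):
        flags.append("no test changes")
    if pr.get("lines_changed", 0) > 500 and not pr.get("description", "").strip():
        flags.append(">500 lines with no description")
    if any(m.strip().lower() in GENERIC_MESSAGES for m in pr.get("commit_messages", [])):
        flags.append("generic commit message")
    return flags

pr = {
    "test_files_changed": 0,
    "lines_changed": 742,
    "description": "",
    "commit_messages": ["Update file", "Fix bug"],
}
print(slop_flags(pr))  # all three heuristics fire
```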

📋 Recommended Workflow

For Confluence pages:

  1. Write with AI assist
  2. Read your own page
  3. Ask: "What's the decision?"
  4. Run through AI agents
  5. If summary is unclear → revise
  6. Publish

For PRs:

  1. Generate with AI
  2. Read your own diff
  3. Run tests locally
  4. Run AI agent review
  5. If issues → fix first
  6. Request human review

🎯 Success Metrics

  • PR Review Cycles: Reduce average rounds from 3 → 2
  • Confluence Page Engagement: Increase read completion rate by 20%
  • Reviewer Satisfaction: Reduce "annoyed" responses from 53% → <25%
  • Time to Decision: Reduce time from doc publish to team alignment

💡 Key Principles

  1. AI is a draft tool, not a publish tool. Every AI output needs human review before sharing.
  2. The author owns the quality. Using AI doesn't transfer responsibility to the recipient.
  3. Volume ≠ Value. A 40% increase in output with declining quality is a net negative.
  4. Machine review before human review. Don't waste human attention on problems AI can catch.
  5. If you can't summarize it, you don't understand it. And neither will your readers.

🔗 Related Resources

  • Stats are from WSJ and HBR articles.
