
🤖 Slop Meter

A Chrome extension that highlights AI-generated content on web pages so you know exactly who wrote what — and who just let ChatGPT write it for them.


What is "slop"?

You know what slop is. You've read it. You've probably accidentally written it.

It's the LinkedIn post that starts with "In today's rapidly evolving landscape..." It's the blog post that tells you it's going to tell you what it's about to tell you. It's the comment that says "Absolutely! There are several key factors to consider" and then lists five bullet points of increasing irrelevance. It's the cover letter that calls itself "passionate about leveraging synergies" without a trace of shame.

Slop is what happens when a person outsources their thinking to an AI and doesn't bother to check if the AI said anything worth reading.

This extension finds it and lights it up in red.


How it works

Four heuristic signals, each contributing to a 0–100 slop score:

1. Burstiness

Real humans write in bursts. Short sentence. Then a long one that ambles along, picks up some context, maybe makes a point or two before arriving somewhere you weren't entirely expecting. Then one word. Boom.

AI writes at a constant 15–18 words per sentence, every single sentence, like a metronome that's been to business school. We measure the standard deviation of sentence lengths. Low variance = high slop.
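A minimal sketch of how this signal could be computed (assumed implementation; the real logic lives in `src/scorer.js`):

```javascript
// Sketch of the burstiness signal. Low standard deviation of sentence
// lengths means uniform, metronomic prose, which maps to a high slop score.
function burstinessScore(text) {
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);
  if (sentences.length < 3) return 0; // too short to judge
  const lengths = sentences.map(s => s.split(/\s+/).length);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  const stdDev = Math.sqrt(variance);
  // Map low deviation to high slop: stdDev of 0 => 100, stdDev >= 10 => 0.
  return Math.max(0, Math.min(100, Math.round((1 - stdDev / 10) * 100)));
}
```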

2. Banned Word Density

There is a list of words that have been so thoroughly strip-mined by AI-generated content that they have lost all meaning. We count them.

The full rogues' gallery: delve, nuanced, groundbreaking, transformative, testament, pivotal, leverage, seamless, holistic, synergize, foster, cultivate, robust, comprehensive, actionable, impactful, paradigm, ecosystem, disruptive, world-class, state-of-the-art, and about thirty of their friends.

If more than 5% of your post is on this list, your score goes to 100. Immediately. No appeals.
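Roughly, the check could look like this (a sketch; `BANNED` here is a small sample, not the repo's full ~50-word list):

```javascript
// Sketch of the banned-word density signal with an abbreviated word list.
const BANNED = new Set([
  "delve", "nuanced", "groundbreaking", "transformative", "testament",
  "pivotal", "leverage", "seamless", "holistic", "robust",
]);

function bannedWordScore(text) {
  const words = text.toLowerCase().match(/[a-z'-]+/g) || [];
  if (words.length === 0) return 0;
  const hits = words.filter(w => BANNED.has(w)).length;
  const density = hits / words.length;
  // Hard cap: more than 5% banned words is an instant 100. No appeals.
  if (density > 0.05) return 100;
  return Math.round((density / 0.05) * 100);
}
```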

3. Filler Phrases

Twenty regex patterns for the exact phrases that signal an AI has taken the wheel and is gently steering toward nothing:

  • "In today's rapidly evolving landscape..." (where? what landscape? a literal landscape?)
  • "It is important to note that..." (why are you noting your note?)
  • "It goes without saying..." (then don't say it)
  • "In conclusion..." (we know, it's the last paragraph)
  • "This article will explore..." (does it though?)
  • "Let me know if you have any questions" (you're a LinkedIn post, you can't receive questions)

Each match adds 25 points. Four filler phrases is a perfect 100. This is not a high bar.
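The scoring rule is simple enough to sketch (patterns abbreviated; the repo ships twenty):

```javascript
// Sketch of the filler-phrase signal with a handful of the patterns.
const FILLERS = [
  /in today's rapidly evolving/i,
  /it is important to note that/i,
  /it goes without saying/i,
  /in conclusion/i,
  /this article will explore/i,
  /let me know if you have any questions/i,
];

function fillerScore(text) {
  const matches = FILLERS.filter(re => re.test(text)).length;
  // Each match adds 25 points, capped at 100: four fillers is a perfect score.
  return Math.min(100, matches * 25);
}
```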

4. Structural Tells

Two patterns that AIs love and humans only use under duress:

Bullet overload. A wall of five perfectly parallel bullet points where each one is "adjective noun through prepositional phrase." Humans write prose. They use bullets when they have an actual list. AI uses bullets to fake the appearance of structure.

Parallel sentence openers. "The platform enables. The system provides. The tool streamlines. The solution enhances." Three or more sentences starting with the same word is a tell. Real people run out of sentence variety and do something different. AI is constitutionally incapable of doing something different.
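The parallel-opener tell can be sketched as (assumed implementation):

```javascript
// Sketch of the parallel-opener detector: three or more sentences
// starting with the same first word trips the signal.
function hasParallelOpeners(text) {
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(Boolean);
  const counts = {};
  for (const s of sentences) {
    const first = s.split(/\s+/)[0].toLowerCase();
    counts[first] = (counts[first] || 0) + 1;
  }
  return Object.values(counts).some(n => n >= 3);
}
```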


The visual output

| Score | What you see | What it means |
|-------|--------------|---------------|
| 0–39 | Nothing | Probably a human. Or a very good AI. Or a bad human. |
| 40–69 | 🟡 Yellow-orange left border | Suspicious. Possibly written by someone who uses AI as a "starting point" and calls it a draft. |
| 70–100 | 🔴 Red left border + score badge | This was written by a robot. The human's contribution was clicking "generate" and then "post". |

The popup icon shows a page-level average — how sloppy is this whole page, as a number.
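The score-to-visual mapping boils down to two thresholds; a sketch (class names here are hypothetical, the real ones are in `highlight.css`):

```javascript
// Sketch of how the highlighter might map a score to a CSS class.
function classForScore(score) {
  if (score >= 70) return "slop-high";   // red border + score badge
  if (score >= 40) return "slop-medium"; // yellow-orange border
  return null;                           // below 40: leave the element alone
}
```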


Installation (no store, raw dev install)

This extension isn't on the Chrome Web Store yet. Install it like a person who knows what they're doing:

  1. Clone the repo:

    git clone https://github.com/tn819/slop-meter.git
  2. Open Chrome and go to chrome://extensions

  3. Enable Developer mode (toggle, top right)

  4. Click Load unpacked

  5. Select the slop-meter folder

  6. Navigate to LinkedIn. Weep.


Where it works

| Site | What gets scored |
|------|------------------|
| LinkedIn | Feed posts, comments |
| Reddit | Post bodies, comments (new and old Reddit) |
| Twitter / X | Individual tweets |
| Substack | Post content |
| Medium | Article paragraphs |
| Hacker News | Comments |
| Everything else | Falls back to `<p>` tags inside `<article>`, `<main>`, `<section>` — catches most content-farm blogs |

New content that loads via infinite scroll or SPA navigation is re-scanned automatically. The extension watches for DOM mutations and re-runs 800ms after things settle down.
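The settle-and-rescan behavior is a classic debounce; a sketch of how it might be wired (assumed implementation; the real wiring is in `src/content.js`):

```javascript
// Sketch of the settle-and-rescan logic: the callback fires once,
// a fixed delay after the last burst of DOM mutations.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// In content.js this would be attached to a MutationObserver, roughly:
//   const rescan = debounce(scanPage, 800);
//   new MutationObserver(rescan).observe(document.body, { childList: true, subtree: true });
```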


Scores from the test suite (actual numbers)

These are real scores from the fixture tests. They are not cherry-picked.

| Sample | Source | Score |
|--------|--------|-------|
| "Okay so basically your fridge has a gas inside it..." | HC3 Human (Reddit ELI5) | 10 |
| "We shipped our payment redesign last week. Three months of arguing about Stripe or Braintree..." | HC3 Human | 10 |
| "Honestly depends what you're optimizing for..." | HC3 Human | 23 |
| "A refrigerator operates through a thermodynamic cycle involving refrigerant gas..." | ChatGPT (same question) | 55 |
| "I'm thrilled to share that in today's rapidly evolving landscape, leveraging nuanced, transformative strategies..." | LinkedIn Slop | 100 |
| "In today's rapidly evolving digital landscape, it is important to note that businesses must adapt..." | Content Farm | 100 |

The LinkedIn post and the content farm both maxed out. There was no partial credit.


What it won't catch

  • Good AI writing — if someone actually edits and improves the output, the signals don't fire. The extension is measuring slop, not AI. Good AI-assisted writing that sounds human will score low. That's fine. We're not trying to punish the tool, just the laziness.

  • Bad human writing — a human can also write terrible, jargon-stuffed corporate prose. This extension will correctly flag it. That's not a bug.

  • Very short text — the burstiness signal requires at least 3 sentences. Short posts score only on banned words and filler phrases.

  • Non-English content — the banned word list and regex patterns are English-only. Everything else will score near zero. Probably deserved, honestly.


Architecture

slop-meter/
├── src/
│   ├── scorer.js      ← The brain. Pure functions, fully tested.
│   ├── selectors.js   ← Finds content chunks per site.
│   ├── highlighter.js ← Applies CSS classes to DOM elements.
│   ├── content.js     ← Orchestrates everything. Runs on every page.
│   ├── background.js  ← Minimal service worker. Logs one line.
│   └── highlight.css  ← The actual red and yellow you see.
├── popup/
│   ├── popup.html     ← 220px of honest UI.
│   ├── popup.js       ← Reads from chrome.storage, displays score.
│   └── popup.css      ← Score ring, legend, nothing fancy.
├── tests/
│   ├── fixtures/
│   │   ├── human_samples.js   ← 5 real human texts from HC3 dataset
│   │   └── ai_samples.js      ← 5 AI texts, including one that scores 100
│   ├── scorer.burstiness.test.js
│   ├── scorer.banned.test.js
│   ├── scorer.fillers.test.js
│   ├── scorer.structural.test.js
│   ├── scorer.combined.test.js  ← End-to-end fixture validation
│   ├── selectors.test.js
│   └── highlighter.test.js
└── manifest.json      ← Manifest V3. No build step. No bundler. Just files.

No webpack. No TypeScript. No build step. The extension folder is the extension. Load it and it works.
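For reference, a minimal Manifest V3 file for an extension with this layout might look roughly like this (field values assumed, not copied from the repo):

```json
{
  "manifest_version": 3,
  "name": "Slop Meter",
  "version": "0.1.0",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["src/scorer.js", "src/selectors.js", "src/highlighter.js", "src/content.js"],
      "css": ["src/highlight.css"]
    }
  ],
  "background": { "service_worker": "src/background.js" },
  "action": { "default_popup": "popup/popup.html" },
  "permissions": ["storage"]
}
```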


Running the tests

npm install
npm test

35 tests. 7 suites. All green.

The combined scorer test runs against 10 labeled ground-truth samples — 5 human, 5 AI — and asserts that human samples score under 40 and AI samples score over 50. This is the meaningful test. The rest are unit tests that make sure the individual signals work in isolation.


The test dataset

The fixture texts are derived from the HC3 dataset (Human ChatGPT Comparison Corpus) — a collection of 27,000 questions answered by both humans and ChatGPT, covering Reddit ELI5, medical, legal, and financial topics. It's the most direct comparison dataset available: same question, two answers, labeled.

The HC3 Reddit ELI5 split is especially good for this purpose because the human answers are genuinely casual — short sentences, personal asides, a healthy disrespect for formal structure. The ChatGPT answers are, predictably, uniform, complete, and a little hollow.


Contributing

PRs welcome. A few things that would make this better:

  • More site selectors — if a site you use isn't in the list, add it. The selector format is simple.
  • More banned words — the list comes from SicariusSicariiStuff/SLOP_Detector and the humanizer skill. If you spot a word that's been permanently ruined by AI, open an issue.
  • More filler patterns — there are infinite variations. "Let's dive in!" is a crime that currently goes undetected.
  • Better icons — the current icons are red squares generated by 8 lines of Python. They are placeholder icons in the truest sense of the word. An actual designer could probably do better.
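If the selector format is as simple as advertised, a new site entry is probably a one-liner along these lines (hypothetical shape; check `src/selectors.js` for the real format before opening a PR):

```javascript
// Hypothetical shape of the per-site selector map, keyed by hostname,
// with a generic fallback for everything else.
const SITE_SELECTORS = {
  "news.ycombinator.com": { content: ".commtext" },
  "default": { content: "article p, main p, section p" },
};

function selectorsForHost(host) {
  return SITE_SELECTORS[host] || SITE_SELECTORS.default;
}
```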

License

MIT. Do what you want with it. Just don't use AI to write the commit messages.


Built because the internet is drowning in beige.
