The 7 tells that flag AI content — and the prompt patterns, persona rules, and edit passes that kill each one.
Most AI writing fails in the same 7 ways. Every tool — ChatGPT, Claude, Jasper, Copy.ai — has the same fingerprints because the base models were trained on overlapping data. Kill the fingerprints and the content reads as human.
"It is worth noting that," "arguably," "in many cases," "can often," "tends to," "may." Models hedge because hedged claims are harder to prove wrong in training data. Humans who actually know the topic do not hedge.
Fix: Add every hedge word to your Persona Brief banned list. If a claim needs a hedge, either cut it or replace the hedge with a specific qualifier ("in the 4 cases I have tested").
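The banned-list check is mechanical enough to script. A minimal sketch in Python; the hedge list here is illustrative, not a complete Persona Brief:

```python
import re

# Illustrative hedge list; a real Persona Brief banned list will be longer.
HEDGES = [
    "it is worth noting that", "arguably", "in many cases",
    "can often", "tends to",
]

def find_hedges(text: str) -> list[str]:
    """Return each banned hedge phrase that appears in the text."""
    lowered = text.lower()
    return [h for h in HEDGES if re.search(r"\b" + re.escape(h) + r"\b", lowered)]
```

Run it on every draft; any non-empty result means the draft needs another pass.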
"Fast, reliable, and scalable." "Easy to use, easy to learn, and easy to love." Three-item lists are a base-model reflex. Sprinkled occasionally, they are fine. Three tricolons per paragraph is a dead giveaway.
Fix: Limit to one tricolon per 500 words. Rewrite the rest as two-item or four-item lists, or as a single punchy claim.
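The one-per-500-words budget can be enforced with a rough scan. A sketch; the regex only catches single-word "X, Y, and Z" lists, so treat it as a linter hint, not a parser:

```python
import re

# Rough heuristic: matches single-word "X, Y, and Z" lists only.
TRICOLON = re.compile(r"\b\w[\w-]*, \w[\w-]*,? (?:and|or) \w[\w-]*", re.IGNORECASE)

def tricolon_budget_ok(text: str, per_words: int = 500) -> bool:
    """True if the draft stays inside one tricolon per `per_words` words."""
    hits = len(TRICOLON.findall(text))
    allowed = max(1, len(text.split()) // per_words)
    return hits <= allowed
```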
"It is not just faster, it is fundamentally different." "This is not just a tool, it is a workflow." The X/Y escalation is the single most overused AI sentence structure.
Fix: Ban the literal phrase "not just" in your Persona Brief. If the contrast actually matters, state both halves as separate, direct claims.
Models default to 3-4 sentence paragraphs because that is what Wikipedia and marketing copy look like. Human writing mixes 1-sentence punches with 6-sentence meanders.
Fix: On your edit pass, merge two adjacent short paragraphs, then split a long one. Vary length deliberately.
"In conclusion," "to summarize," "ultimately," "at the end of the day." Models close with a recap because their training data rewards it. Human writers usually close with a concrete line — a callback, a provocation, or a specific next action.
Fix: Delete any paragraph that starts with a summary word. Replace with one concrete sentence.
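The delete rule is scriptable too. A quick check; the opener list mirrors the examples above and is not exhaustive:

```python
# Openers to mirror the banned summary words above; extend as needed.
SUMMARY_OPENERS = (
    "in conclusion", "to summarize", "ultimately", "at the end of the day",
)

def opens_with_summary(paragraph: str) -> bool:
    """True if the paragraph starts with a banned summary opener."""
    return paragraph.strip().lower().startswith(SUMMARY_OPENERS)
```

`str.startswith` accepts a tuple of prefixes, so one call covers the whole list.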
Models overuse em-dashes — like this — as a substitute for real sentence structure. Humans use em-dashes occasionally, usually for asides that would otherwise need parentheses.
Fix: Limit to 1 em-dash per 300 words. Convert the rest to periods, commas, or parentheses.
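The density rule is easy to automate. A sketch; the 1-per-300-words threshold is this guide's rule of thumb, not a universal standard:

```python
def em_dash_budget_ok(text: str, per_words: int = 300) -> bool:
    """True if the em-dash count stays within one per `per_words` words."""
    dashes = text.count("\u2014")  # the em-dash character
    allowed = max(1, len(text.split()) // per_words)
    return dashes <= allowed
```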
"Studies show," "research suggests," "experts agree." No study, no citation, no expert. Models use this because it is a safe rhetorical pattern. It is also a giant red flag.
Fix: Every claim needs either a specific citation or a "this is my opinion, framed as my opinion" signal. No vague authority.
Kompozy’s Persona Brief includes a banned-words list that ships with all 7 tells pre-blocked. Outputs that slip a banned phrase past the prompt-level constraint get rejected at the brand-safety gate and regenerated.
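The reject-and-regenerate gate can be sketched as a simple loop. Everything here is hypothetical: `generate` stands in for whatever model call you use, and this is not Kompozy's actual implementation:

```python
def gated_generate(generate, banned: list[str], max_tries: int = 3) -> str:
    """Regenerate until no banned phrase survives, up to `max_tries` drafts."""
    for _ in range(max_tries):
        draft = generate()
        if not any(phrase in draft.lower() for phrase in banned):
            return draft
    raise RuntimeError("banned phrase survived every retry")
```

With a stub generator whose first draft slips a banned phrase, the gate discards it and returns the second draft.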
Can AI detectors catch these tells? Sometimes, but the results are unreliable. The bigger problem is not the detector: readers feel the difference even when they cannot name it. Fix the human signal, not the detector score.
Does Google penalize AI content? No. Google penalizes unhelpful content, not AI content specifically. A well-edited AI-assisted post ranks fine. A thin, hedging, unsourced AI post does not, regardless of how it was written.
What does the edit pass cover? Cut every hedge word, vary sentence length, add one specific detail that only a human who lived the experience would know, and swap the closing summary for a concrete line.
Do you still need to edit every output? Yes, at least for the first 20 outputs. Every edit feeds back into the brief. After that, most outputs ship untouched.
AI content reads as AI because of 7 consistent tells: hedge words, tricolons, "not just X but Y" constructions, uniform paragraph length, closing summaries, em-dash overuse, and vague authority citations. Kill each one via Persona Brief banned-word lists plus a 5-minute manual edit pass per output.