Key takeaway
- data-pm-slice is a ProseMirror editor artifact, not a Google ranking signal. It appears on millions of legitimate websites with zero connection to AI.
- Google’s SynthID can watermark AI text, but detection capability and ranking policy are two separate things; there is no public evidence that SynthID is used to demote content in Search.
- The real culprit behind traffic drops is content quality, not the HTML tag. The tag is evidence of a sloppy workflow, not the cause of the penalty.
- Google’s policies are origin-agnostic; they target low-value, scaled content, whether it’s written by humans, AI, or both.
- “Humanization” tools are a waste of money. Scrubbing tags and swapping em dashes fixes nothing if the content itself lacks depth, originality, or expertise.
- AI content does rank on page one, but human-led content dominates the top spots because of quality, not because of a technical advantage.
- The winning strategy is to use AI as an accelerant, not a replacement: inject proprietary data, firsthand experience, and original research that AI cannot produce alone.
- Google’s own guidance has been consistent for years: quality, originality, and usefulness are what get rewarded, regardless of how the content was produced.
The SEO community is in the middle of an AI detection panic. One theory says Google catches AI content through a hidden HTML attribute, data-pm-slice, left behind when you paste from ChatGPT. A more sophisticated version points to real provenance infrastructure like SynthID, where watermarking happens at the token-probability level during text generation. And behind both, there’s a growing fear that Google will eventually use any detection signal it has to demote AI content in Search rankings.
These are three different claims with three different levels of evidence behind them. This post separates them.
I’m the founder of Quattr, a platform that helps brands use AI to generate and optimize content. I have a stake in this question, which is exactly why I care about getting the evidence right rather than amplifying panic.
The Tag Is Real. The Evidence for a Penalty Is Not.
The data-pm-slice attribute absolutely shows up in the source code of published web pages. That part is an observable, reproducible fact. Here’s what it actually is: ProseMirror is an open-source rich-text editing toolkit used widely across the software industry, in CMS platforms, enterprise documentation tools, newsroom publishing systems, and yes, ChatGPT’s web interface.
When you copy formatted text from any ProseMirror-based editor, your clipboard captures an HTML payload. If your CMS doesn’t sanitize incoming HTML, that attribute rides along into your published page. But that’s all you can conclude: anyone pasting from any ProseMirror-powered interface, AI tool or not, leaves the same artifact.
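The fix is mundane CMS hygiene, not SEO wizardry. Here is a minimal sketch, using only Python’s standard-library `html.parser`, of the kind of sanitization step a publishing pipeline could run on pasted HTML. The function and class names are illustrative, not part of any particular CMS:

```python
from html.parser import HTMLParser


class AttributeStripper(HTMLParser):
    """Rebuilds HTML while dropping any data-* attribute (e.g. data-pm-slice)."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Keep only attributes that are not custom data-* attributes.
        kept = [(k, v) for k, v in attrs if not k.startswith("data-")]
        attr_str = "".join(f' {k}="{v}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)


def strip_editor_artifacts(html: str) -> str:
    """Return the input HTML with all data-* attributes removed."""
    parser = AttributeStripper()
    parser.feed(html)
    return "".join(parser.out)


pasted = '<p data-pm-slice="1 1 []">Pasted paragraph.</p>'
print(strip_editor_artifacts(pasted))  # <p>Pasted paragraph.</p>
```

A production pipeline would use a hardened sanitizer library rather than a hand-rolled parser, but the principle is the same: the artifact disappears with one normalization pass, which is exactly why it makes a poor fingerprint.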
No Good Evidence That Google Uses This as a Ranking Signal
Google’s public documentation does not identify data-pm-slice as a ranking signal. Custom data-* attributes exist in the HTML5 spec specifically so developers can store arbitrary application data on DOM elements. Using data-pm-slice as a demotion signal would create an obvious false-positive risk across the many legitimate publishing workflows that have nothing to do with AI.
But What About Model-Level Watermarking?
The stronger argument is about Google’s SynthID Text, a watermarking system developed by DeepMind that operates at the token-probability level during text generation. SynthID adjusts the logits, the mathematical weights that determine which word comes next, in a way that’s imperceptible to readers but statistically detectable by purpose-built tools.
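SynthID’s production algorithm is more sophisticated and not fully public, so treat the following as a toy sketch of the general idea behind logit-level watermarking, in the style of published green-list schemes: hash the previous token to select a “green” subset of the vocabulary, nudge sampling toward that subset, then detect by measuring how far the green-token count deviates from chance. Every name here (VOCAB, green_set, the 0.9 bias) is a made-up illustration, not SynthID’s actual mechanism:

```python
import hashlib
import math
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary, purely illustrative


def green_set(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically partition the vocabulary using a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * fraction)])


def generate(length: int, watermark: bool, seed: int = 0) -> list:
    """Sample tokens uniformly; with the watermark on, bias sampling toward green tokens."""
    rng = random.Random(seed)
    tokens = ["<s>"]
    for _ in range(length):
        green = green_set(tokens[-1])
        if watermark and rng.random() < 0.9:
            # Watermark: most of the probability mass lands on the green subset.
            tokens.append(rng.choice(sorted(green)))
        else:
            tokens.append(rng.choice(VOCAB))
    return tokens[1:]


def detect(tokens: list) -> float:
    """z-score of the green-token count vs. the 50% expected by chance; high means watermarked."""
    hits = sum(t in green_set(prev) for prev, t in zip(["<s>"] + tokens, tokens))
    n = len(tokens)
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

Running `detect` on a watermarked sample yields a large z-score while unwatermarked text stays near zero. The point of the sketch: the signal lives in sampling statistics, invisible to readers, and detecting it says nothing about whether a search engine chooses to act on it.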
Detection capability and ranking policy are different questions. Google may be able to detect some AI-generated text at the provenance layer, but there is no public evidence it uses that capability as a blanket demotion signal; its stated policy evaluates content on its merits, not its origin.
The Real Trap: Correlation Masquerading as Causation
A webmaster publishes a batch of AI-generated content. Traffic drops. They inspect their source code, discover data-pm-slice, and conclude they’ve found the cause. But the causation runs the other way. The tag didn’t cause the traffic loss. The tag is evidence of the workflow that caused the traffic loss: someone copying content into the CMS with little or no editorial cleanup.
The resulting content is what Google’s systems are designed to evaluate. Not the HTML artifact, but the quality of what was published.
What Google Has Actually Said About AI Content
Google’s February 2023 guidance was unambiguous: ranking systems reward original, high-quality content demonstrating E-E-A-T, regardless of how it’s produced.

The March 2024 Core Update introduced “Scaled Content Abuse,” targeting large volumes of content made to manipulate rankings. The policy is origin-agnostic: it applies whether content is produced by automation, humans, or both.

The Data: AI Content Ranks. Just Not as Well.
A Semrush study from November 2025 analyzed the top 10 Google results across 20,000 keywords. AI-generated content clearly ranks on page one, including in top positions, though human-written content dominates the highest spots by a wide margin. If a technical footprint penalty existed, AI content would be virtually absent. That’s not what the data shows.
Why “Humanization” Workflows Miss the Point
Teams are pouring resources into “humanization” workflows, running content through AI bypasser tools that strip formatting, replace punctuation, and introduce deliberate errors. All to evade a detection mechanism that no public evidence suggests exists. If your content provides zero information gain, stripping data-pm-slice or swapping em dashes won’t help.
What Actually Moves the Needle
Use AI as an accelerant, not a replacement. 87% of high-performing SEO teams describe their content as human-created or heavily human-led. Only 19% believe AI improves content quality; the primary benefit is speed.
Inject what AI cannot produce on its own. Proprietary data. Original research. Firsthand experience. Contrarian analysis grounded in real outcomes.
Clean up your pipeline for the right reasons. Sanitize your HTML because clean markup is good practice for accessibility and performance, not because data-pm-slice triggers a penalty.
One Question to Guide Every Content Decision
The data-pm-slice theory follows a pattern that the SEO industry has repeated for over a decade. When Panda hit, everyone obsessed over word counts. When Penguin hit, it was anchor text ratios. Now, the search is on for a definitive technical marker, and a ProseMirror attribute fits the narrative.
The real question isn’t whether Google can detect your AI content. It’s whether your content, regardless of how it was produced, is worth ranking. Focus there.