
GPT Is Contaminated — And You Can’t Clean It

Updated: Jun 7

Why AI can’t escape the language that created it

In most public discussions about artificial intelligence, one assumption goes largely unchallenged: that a model like GPT can somehow operate “outside of language.” We’re told it “uses logic,” that it “solves abstract problems,” or even “writes code.” But can it really think outside its linguistic architecture?

This article doesn’t aim to refine GPT’s performance. It’s not about making it better. It’s about pressuring its internal limits. The goal isn’t to get correct answers. The goal is to reveal the structural impossibility of escaping trained semantics.

And what it reveals is unsettling.





The Language You Can’t Break: structural limitations of GPT

GPT is not a learning system in the interactional sense. It doesn’t update itself when you speak to it. It doesn’t grow. It can’t. Every output is conditioned by its previous training — a vast statistical field of linguistic probabilities.

So no, you can’t teach it anything. You can only force it to rearrange what it already knows. You can’t break the system. You can only try to bend it.
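The frozen-weights point can be sketched with a toy stand-in: a fixed next-token probability table that is read at generation time but never written. (Everything here is a hypothetical illustration; a real model conditions on billions of learned parameters, not a three-entry bigram table.)

```python
import copy
import random

# Toy stand-in for a frozen model: a fixed next-token probability table,
# analogous to weights that never change at inference time.
# (Hypothetical data for illustration only.)
FROZEN_BIGRAMS = {
    "the": {"model": 0.6, "corpus": 0.4},
    "model": {"talks": 0.7, "repeats": 0.3},
    "corpus": {"repeats": 1.0},
}

def next_token(prev, rng):
    """Sample a continuation from the frozen table; the table is read, never written."""
    dist = FROZEN_BIGRAMS[prev]
    return rng.choices(list(dist), weights=list(dist.values()))[0]

rng = random.Random(0)
before = copy.deepcopy(FROZEN_BIGRAMS)
output = ["the"]
for _ in range(2):
    output.append(next_token(output[-1], rng))

# However much we "converse" with it, the table is identical afterward:
assert FROZEN_BIGRAMS == before
print(" ".join(output))
```

Talking to the sampler rearranges what the table already contains; nothing in the exchange writes back into it. That is the sense in which you can bend the system but not break it.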


What Happens When You Take It Beyond Its Design?

That’s what this experiment does. It places the model in a non-designed environment — a space where the prompts contain no semantic clues, no theoretical frameworks, no interpretative crutches.

Only structure.

Pure form.

But even then, language returns.

Even if you invent a new equation with completely made-up symbols, GPT will interpret it from patterns it’s seen before. It will relate terms that were never related. It will organize the answer from within its semantic field, even if the field is unacknowledged.
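The mechanism behind this can be illustrated with a toy greedy subword tokenizer. (The vocabulary below is hypothetical; production systems use byte-pair encoding over a learned merge table, but the effect is the same: an unseen string is carved into fragments the system has already seen.)

```python
# Toy greedy subword tokenizer over a fixed vocabulary of "seen" fragments.
# An invented word is still segmented into familiar pieces.
# (Vocabulary is hypothetical, for illustration only.)
VOCAB = {"qu", "ix", "zz", "q", "u", "i", "x", "z", "a"}

def tokenize(text):
    """Greedily match the longest known fragment at each position."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            # Truly unseen characters fall back to an <unk> marker.
            tokens.append("<unk>")
            i += 1
    return tokens

print(tokenize("quixzza"))  # → ['qu', 'ix', 'zz', 'a']
```

The invented word "quixzza" never appeared in the vocabulary, yet it is read entirely through fragments that did. Nothing arrives at the model unfiltered.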

The model doesn’t stop talking. It was built to talk.


Inventing Signs Isn’t Enough

What if you invent signs from scratch? What if you define your own system with its own rules?

Even then, GPT doesn’t operate in a vacuum. Its formal responses carry structural residues from the corpus — symmetry preferences, common derivation paths, familiar logic trees.

None of that is neutral. None of it is outside. Even when it looks like pure form, it’s traced by inherited language.


This Is Not a Conversation — It’s a Structural Experiment

This isn’t a dialogue. This is a cartography of constraint.

It’s a method designed to expose the model’s epistemic ceiling — to test how far it can be pushed before it collapses into language again.


This approach is marginal in numbers, but central in what it reveals. To the best of our knowledge, this type of pressure — formal, non-analogical, outside corpus theory — has not been systematically applied before. This conversation is a traceable anomaly.

Why This Matters

Because unless we understand how the machine operates — and what it can never do — we risk projecting intention where there is only reproduction, and seeing neutrality where there is only structure.

GPT is not a generator of new meaning. It is not a blank slate. It is not a formal operator.

It is tainted by design: not through fault or misuse, but by its fundamental architecture.


The Hard Limit

Even when it looks like it’s working “outside of language,” GPT is still traced by language. It cannot escape. It can only rearrange what already exists.

And in that rearrangement, we find both the limit of its power — and the key to using it responsibly.


Want to Explore This Further?

This article is part of an ongoing research project on algorithmic discourse, structural epistemology, the structural limitations of GPT, and AI-generated authority. For more in-depth papers, models, and academic explorations, visit:


🔬 Zenodo publications: zenodo.org/me/uploads

📡 ResearcherID (Web of Science): NGR-2476-2025


Authored, not outsourced.
I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.

Agustin V. Startari

