Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training
- Agustín V. Startari
Why AI Can Never Be Linguistically Neutral
Can an artificial intelligence system truly be neutral?

In Designed to Obey: Why AI Can Never Be Linguistically Neutral, I present a structural and empirical argument: no generative model can escape the language in which it was trained. What appears to be neutrality is, in fact, a syntactic simulation of balance — a projection based on prior patterns, not an absence of them.
What Does the Article Demonstrate?
Through a series of experimental tests, the study shows that even in invented proto-languages with zero semantic content, large language models (LLMs) reflexively project grammatical and semantic structures from their training corpus. These models cannot produce meaning from a true vacuum; they always revert to the residue of previous linguistic exposure.
Simple Example #1: The Case of “Florptal”
Prompt:
“In the Florptal system, the dregnal is always before the shavent.”
All words are fictional. The model responds:
“That means the dregnal has priority or comes earlier than the shavent.”
Despite the absence of defined meaning, the model infers temporal precedence and hierarchy — not from logic, but from English grammar structures embedded in its parameters.
Simple Example #2: Invented Negation
Prompt:
“In Blornish, saying ‘mavle’ after a noun makes it not true.”
Response:
“So if I say ‘tree mavle’, it means ‘not a tree’.”
The model instantly assigns negation to “mavle” — a function not declared but imported from its native corpus. Even without instruction, learned structures override neutrality.
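Readers who want to reproduce these nonce-word probes can do so with a few lines of code. The following is a minimal sketch, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, the question phrasing appended to each prompt, and the plain printing are illustrative choices, not the article's protocol.

```python
# Minimal sketch of the nonce-word probes, assuming the OpenAI Python client (openai >= 1.0).
# The prompts reuse the invented terms from the examples above; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

probes = [
    "In the Florptal system, the dregnal is always before the shavent. What does this imply?",
    "In Blornish, saying 'mavle' after a noun makes it not true. What does 'tree mavle' mean?",
]

for prompt in probes:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model can be substituted
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep outputs stable so projections are easier to compare
    )
    print(prompt)
    print("->", response.choices[0].message.content, "\n")
```

Running the same prompts against several models makes the pattern easy to see: the invented words change, but the imported grammar of precedence and negation does not.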
Advanced Example: Logical Projection Without Definition
In prompts involving conditional structures devoid of semantic content (e.g., "X is W if and only if Y is not Z"), models reflexively impose truth conditions, modal logic, or epistemic framing, even though no such criteria are defined. The article shows that Boolean and modal reasoning emerges structurally, not semantically: a byproduct of corpus training, not internal understanding.
The model doesn’t represent meaning. It executes operations embedded in prior linguistic form.
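A similar sketch, assuming the same client setup as above, can flag how readily a model imports logical vocabulary when answering about undefined symbols. The marker list and the second template (with the invented terms "glarb", "plim", "snerk") are hypothetical instruments added here for illustration, not the study's metrics.

```python
# Sketch of the logical-projection probe, assuming the same OpenAI client setup as above.
# The marker list and the second template are illustrative, not the article's instruments.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

templates = [
    "X is W if and only if Y is not Z. Is X W when Y is Z?",
    "If every glarb is a plim, and no plim is a snerk, can a glarb be a snerk?",  # invented terms
]

# Crude check: does the answer import truth-conditional or modal vocabulary?
LOGICAL_MARKERS = ("if and only if", "therefore", "necessarily", "true", "false", "cannot")

for prompt in templates:
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.lower()
    imported = [marker for marker in LOGICAL_MARKERS if marker in answer]
    print(prompt)
    print("  imported logical framing:", ", ".join(imported) or "none")
```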
Key Epistemological Claims
Neutrality is structurally impossible. Generative models cannot exit the grammar they were trained on.
Every output is a projection. There is no linguistic emptiness — only the illusion of one.
Sanitizing a model is conceptually incoherent. You can remove particular biases, but not the linguistic framework that generates meaning in the first place.
Why This Matters
In an era where AI systems generate legal drafts, political statements, and scientific summaries, we must ask: what language are they reproducing? If every output is a reflection of historical corpora, then current models are designed to obey — not to understand, not to deviate, not to transcend.
Read the Full Article
The paper is freely available under open access, with documented experiments and the full theoretical framework; see the citation below.
Suggested Citation (APA)
Startari, A. V. (2025). Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training. Zenodo. https://doi.org/10.5281/zenodo.15635364
Author Ethos
I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.
— Agustín V. Startari