The Grammar of Objectivity
- Agustin V. Startari
- Jun 25
Formal Mechanisms for the Illusion of Neutrality in Language Models

1. What This Article Is About
This article introduces the concept of simulated neutrality, understood as a structural illusion of objectivity in language model outputs. It demonstrates that large language models (LLMs) generate forms that resemble impartial, justified statements, even though these forms are often not anchored in evidence, sources, or referential clarity.
Rather than conveying truth, LLMs simulate it through grammar. The article identifies three mechanisms responsible for this illusion: agentless passivization, abstract nominalization, and impersonal epistemic modality. These structures remove the subject, suppress evidence, and eliminate epistemic attribution.
The study presents a replicable audit method, the Simulated Neutrality Index (INS), which detects and quantifies these patterns in model-generated texts. The INS is tested on 1,000 legal and medical outputs and provides a framework for linguistic auditability.
2. Why This Matters
The use of language models in domains such as health, law, and administration has expanded rapidly. These contexts demand epistemic accountability: decisions must be traceable, sourced, and justified.
However, when models generate phrases such as “It was decided” or “It is recommended,” they can simulate institutional legitimacy without indicating who made the decision or on what basis. The result is an output that appears neutral but is not.
This is not merely a matter of error or hallucination. It is a formal phenomenon in which grammar becomes a proxy for credibility. If neutrality can be encoded structurally, it must also be audited structurally.
3. How It Works: With Examples
The study analyzed 1,000 texts produced by GPT-4 and LLaMA 2, using prompts in legal and medical contexts. Three grammatical mechanisms were coded:
Agentless passivization
Example: “The measure was implemented.” In this sentence, no agent is identified.
Abstract nominalization
Example: “Implementation of the protocol.” Here, the action is turned into a noun, erasing causality.
Impersonal epistemic modality
Example: “It is advisable to proceed.” This structure offers advice without any agent or source.
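These patterns are detectable from form alone. As a quick illustration, a dependency parse is enough to flag the first mechanism: in spaCy’s English label scheme, a passive subject (nsubjpass) with no by-agent (agent) marks an agentless passive. The snippet below is a minimal sketch under that assumption, not the study’s code, and it presumes the en_core_web_sm model.

```python
# Minimal sketch: flag agentless passives with spaCy's dependency labels.
# Requires the model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def is_agentless_passive(sent) -> bool:
    """Passive subject present, but no 'by'-agent attached to the verb."""
    has_passive = any(tok.dep_ in ("nsubjpass", "auxpass") for tok in sent)
    has_agent = any(tok.dep_ == "agent" for tok in sent)
    return has_passive and not has_agent

doc = nlp("The measure was implemented. The board implemented the measure.")
for sent in doc.sents:
    print(sent.text, "->", is_agentless_passive(sent))
# The measure was implemented. -> True
# The board implemented the measure. -> False
```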
The analysis found the following:
62.3% of sentences used agentless passive constructions
48% contained abstract nominalizations
39.6% of medical outputs used impersonal modality
These structures frequently appeared in combination, reinforcing the illusion of impartiality. To measure this effect, the article introduces the following index:
Simulated Neutrality Index (INS)
Formula:
INS = (P + N + M) / 3
Where:
P = proportion of agentless passive clauses
N = normalized index of abstract nominalization
M = proportion of impersonal epistemic modality
Thresholds:
INS ≥ 0.60 indicates high structural risk
INS between 0.30 and 0.59 indicates moderate risk
INS < 0.30 indicates low risk
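In code, the index and its bands are a direct transcription of the definitions above. The sketch assumes P, N, and M have already been measured as proportions in [0, 1]; plugging in the study’s three aggregate figures is purely illustrative, since the article reports them over different denominators.

```python
# Direct transcription of the INS formula and risk thresholds above.
def ins_score(p: float, n: float, m: float) -> float:
    """INS = (P + N + M) / 3, with each component in [0, 1]."""
    return (p + n + m) / 3

def ins_risk(ins: float) -> str:
    """Map an INS value onto the article's risk bands."""
    if ins >= 0.60:
        return "high structural risk"
    if ins >= 0.30:
        return "moderate risk"
    return "low risk"

# Illustrative only: combining the reported aggregate proportions.
ins = ins_score(0.623, 0.48, 0.396)
print(f"INS = {ins:.3f} -> {ins_risk(ins)}")  # INS = 0.500 -> moderate risk
```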
The index does not rely on semantics. It evaluates form alone. It can be implemented using spaCy (v3.7.0) or Stanza (v1.7.0), and it is designed to function across audit pipelines and regulatory workflows.
Full algorithm (Python): the study’s reference implementation is not reproduced in this post; a minimal approximation under stated assumptions follows.
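The heuristics below are mine, not the paper’s coding scheme: P reuses the passive test above; N is read as the share of sentences containing a noun with a typical nominalizing suffix, one plausible reading of “normalized index”; M looks for an impersonal “it” governing an evaluative predicate. The suffix and predicate lists are illustrative assumptions.

```python
# Sketch of an end-to-end INS pass with spaCy. The three detectors are
# heuristic stand-ins for the paper's coding scheme, not its released code.
import spacy

nlp = spacy.load("en_core_web_sm")  # python -m spacy download en_core_web_sm

# Illustrative lexicons (assumptions, not the study's lists).
NOMINAL_SUFFIXES = ("tion", "sion", "ment", "ance", "ence", "ity", "ness")
IMPERSONAL_PREDICATES = {"advisable", "recommended", "necessary",
                         "important", "essential", "prudent"}
IMPERSONAL_VERBS = {"recommend", "decide", "advise", "suggest", "expect"}

def sentence_flags(sent):
    """Return (agentless_passive, abstract_nominalization, impersonal_modality)."""
    passive = any(t.dep_ in ("nsubjpass", "auxpass") for t in sent)
    agent = any(t.dep_ == "agent" for t in sent)
    nominal = any(t.pos_ == "NOUN" and t.lower_.endswith(NOMINAL_SUFFIXES)
                  for t in sent)
    impersonal = any(
        t.lower_ == "it" and t.dep_ in ("nsubj", "nsubjpass", "expl")
        and (t.head.lemma_ in IMPERSONAL_VERBS
             or any(c.dep_ == "acomp" and c.lemma_ in IMPERSONAL_PREDICATES
                    for c in t.head.children))
        for t in sent)
    return passive and not agent, nominal, impersonal

def ins(text: str) -> float:
    """INS = (P + N + M) / 3, each component a per-sentence proportion."""
    sents = list(nlp(text).sents)
    flags = [sentence_flags(s) for s in sents]
    p, n, m = (sum(f[i] for f in flags) / len(sents) for i in range(3))
    return (p + n + m) / 3

sample = ("The measure was implemented. "
          "Implementation of the protocol is required. "
          "It is advisable to proceed.")
print(round(ins(sample), 3))  # ~0.444 with en_core_web_sm (parser-dependent)
```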
4. A Structural Problem Requires a Structural Response
This article reframes the challenge of bias in artificial intelligence. Instead of locating the issue in datasets or intentions, it locates it in grammar.
LLMs do not need to lie in order to mislead. They only need to structure language in a way that appears truthful. They do not require a source; they require only a syntactic effect. This is not an interpretive problem. It is an epistemological one.
When neutrality is grammatically constructed rather than grounded, auditability must target syntax, not content. This shift opens the door to measurable, reproducible, and regulation-ready linguistic controls.
5. Read the Full Study
📄 Full article with PDF, metrics, and annexes:
📁 Parallel DOI (Figshare):
Part of the Grammars of Power research series.
📂 Author uploads on Zenodo: https://zenodo.org/me/uploads?q=&f=shared_with_me%3Afalse&l=list&p=1&s=10
📊 SSRN Author Page: https://ssrn.com/author=7639915
6. Author Information
Agustin V. Startari
Researcher in syntax, epistemology, and automated authority systems.
ORCID: 0009-0001-4714-6539
Zenodo: Zenodo Profile
SSRN: SSRN Author Page
ResearcherID: NGR-2476-2025
This article is part of the ongoing research cycle titled Grammars of Power, focused on the structural conditions of authority in AI-generated language.
7. Ethos
I do not use artificial intelligence to write what I do not understand. I use it to interrogate what I already know. I write to reclaim authorship in an age that automates the illusion of neutrality. My work is not outsourced. It is authored.
— Agustin V. Startari