Explore Research
Peer-reviewed articles, theoretical proposals, and interdisciplinary papers.
Articles & Papers
This section presents the core academic work of Agustín V. Startari, focused on the structures of language, legitimacy, and authority. It includes published articles, working papers, and formal research projects across linguistics, history, epistemology, and artificial intelligence.
Executable Power: Syntax as Infrastructure in Predictive Societies
Year: 2025
Description:
This article defines executable power as a form of authority enacted through syntactic structures rather than subjects, narratives, or symbolic legitimacy. Grounded in formal grammars and deterministic execution conditions (e.g., LTL, Solidity require clauses), it analyzes three empirical systems—LLMs, TAPs, and smart contracts—that perform irreversible actions upon formal triggers. All cases demonstrated a 100 % execution rate with no human override and an average latency of 0.63 ± 0.17 s. The study frames this syntactic operativity as a rupture from classical theories of power, aligning instead with infrastructural and executional governance models. Verification is outcome-based and replicable, emphasizing structural sovereignty over interpretative authority.
Canonical DOI: 10.5281/zenodo.15754714
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29424524
Abstract:
This article introduces the concept of executable power as a structural form of authority that does not rely on subjects, narratives, or symbolic legitimacy, but on the direct operativity of syntactic structures. Defined as a production rule whose activation triggers an irreversible material action—formalized by deterministic grammars (e.g., Linear Temporal Logic, LTL) or by execution conditions in smart contract languages such as Solidity via require clauses—executable power is examined through a multi-case study (N = 3) involving large language models (LLMs), transaction automation protocols (TAP), and smart contracts. Case selection was based on functional variability and execution context, with each system constituting a unit of analysis. One instance includes automated contracts that freeze assets upon matching a predefined syntactic pattern; another involves LLMs issuing executable commands embedded in structured prompts; a third examines TAP systems enforcing transaction thresholds without human intervention. These systems form an infrastructure of control, operating through logical triggers that bypass interpretation. Empirically, all three exhibited a 100 % execution rate under formal trigger conditions, with average response latency at 0.63 ± 0.17 seconds and no recorded human override in controlled environments. This non-narrative modality of power, grounded in executable syntax, marks an epistemological rupture with classical domination theories (Arendt, Foucault) and diverges from normative or deliberative models. The article incorporates recent literature on infrastructural governance and executional authority (Pasquale, 2023; Rouvroy, 2024; Chen et al., 2025) and references empirical audits of smart-contract vulnerabilities (e.g., Nakamoto Labs, 2025), as well as recent studies on instruction-following in LLMs (Singh & Alvarado, 2025), to expose both operational potential and epistemic risks. The proposed verification methodology is falsifiable, specifying outcome-based metrics—such as execution latency, trigger-response integrity, and intervention rate—with formal verification thresholds (e.g., execution rate below 95 % under standard trigger sequences) subject to model checking and replicable error quantification.
DOI: https://doi.org/10.5281/zenodo.15754714
This work is also published on Figshare (https://doi.org/10.6084/m9.figshare.29424524); an SSRN ID is pending, expected Q3 2025.
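To make the outcome-based verification concrete, here is a minimal Python sketch, written for this page rather than taken from the article, that models a Solidity-style require guard firing an irreversible action and computes the three metrics named in the abstract: execution rate, mean latency, and intervention rate. The function names, trigger pattern, and trial data are illustrative assumptions; only the 95 % threshold echoes the abstract's stated falsifiability criterion.

```python
import statistics
import time

# Minimal sketch (not from the article): a Solidity-style `require` guard
# fires an irreversible action on a formal syntactic trigger, and the run
# is verified by outcome-based metrics. Names, the trigger pattern, and the
# trial data are illustrative assumptions.

def require(condition):
    """Abort execution unless the formal trigger condition holds."""
    if not condition:
        raise RuntimeError("require failed: trigger condition not met")

def execute_on_trigger(pattern, message, frozen):
    require(pattern in message)   # deterministic syntactic trigger
    frozen.append(message)        # irreversible action: freeze the asset

def run_trials(trials, pattern="FREEZE"):
    frozen, latencies, executed, interventions = [], [], 0, 0
    for msg in trials:
        start = time.perf_counter()
        try:
            execute_on_trigger(pattern, msg, frozen)
            executed += 1
        except RuntimeError:
            interventions += 1    # stands in for a blocked or overridden run
        latencies.append(time.perf_counter() - start)
    rate = executed / len(trials)
    print(f"execution rate: {rate:.0%}  "
          f"mean latency: {statistics.mean(latencies):.6f} s  "
          f"intervention rate: {interventions / len(trials):.0%}")
    assert rate >= 0.95, "falls below the formal verification threshold"

run_trials(["FREEZE account 0x42", "FREEZE account 0x07", "FREEZE account 0x99"])
```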
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Ethos Without Source: Algorithmic Identity and the Simulation of Credibility
Year: 2025
Description:
This article introduces the concept of synthetic ethos—algorithmically generated credibility without a human subject or verifiable source. Through analysis of 1,500 outputs from GPT-4, Claude, and Gemini across healthcare, law, and education, it shows how language models simulate authority through structural patterns like passive voice, modality, and technical jargon. The paper proposes a falsifiable framework for detecting synthetic ethos and argues for regulatory approaches based on linguistic structures rather than semantic content.
Canonical DOI: 10.5281/zenodo.15700412
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29367062
Abstract:
Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces synthetic ethos: the appearance of credibility constructed by algorithms trained to replicate human-like discourse without any connection to expertise, accountability, or source traceability. Such simulations raise critical risks in high-stakes domains including healthcare, law, and education.
We analyze 1,500 AI-generated texts produced by large-scale models such as GPT-4, collected from public datasets and benchmark repositories. Using discourse analysis and pattern-based structural classification, we identify recurring linguistic features, such as depersonalized tone, adaptive register, and unreferenced assertions, that collectively produce the illusion of a credible voice. In healthcare, for instance, generative models produce diagnostic language without citing medical sources, risking patient misguidance. In legal contexts, generated recommendations mimic normative authority while lacking any basis in legislation or case law. In education, synthetic essays simulate scholarly argumentation without verifiable references.
Our findings demonstrate that synthetic ethos is not an accidental artifact, but an engineered outcome of training objectives aligned with persuasive fluency. We argue that detecting such algorithmic credibility is essential for ethical and epistemically responsible AI deployment. To this end, we propose technical standards for evaluating source traceability and discourse consistency in generative outputs. These metrics can inform regulatory frameworks in AI governance, enabling oversight mechanisms that protect users from misleading forms of simulated authority and mitigate long-term erosion of public trust in institutional knowledge.
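As a rough illustration of the kind of pattern-based structural classification described above, the following Python sketch scores a text on a few surface cues associated with synthetic ethos (passive constructions, hedging modality, unreferenced assertions). The cue lists, weighting, and scoring are invented for this sketch; the paper's actual discourse-analytic procedure is richer.

```python
import re

# Illustrative scorer for "synthetic ethos" surface features. The cue
# regexes and the equal-weight averaging are assumptions of this sketch,
# not the paper's metric.

PASSIVE = re.compile(r"\b(?:is|are|was|were|been|being)\s+(?:\w+ly\s+)?\w+ed\b", re.I)
MODALITY = re.compile(r"\b(?:may|might|could|should|must|typically|generally)\b", re.I)
CITATION = re.compile(r"\(\s*[A-Z][A-Za-z]+,?\s+\d{4}\s*\)|\[\d+\]")

def synthetic_ethos_score(text):
    """Average per-sentence rates of three cues into a [0, 1] score."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    n = len(sentences)
    passive = sum(bool(PASSIVE.search(s)) for s in sentences) / n
    modal = sum(bool(MODALITY.search(s)) for s in sentences) / n
    uncited = sum(not CITATION.search(s) for s in sentences) / n
    return (passive + modal + uncited) / 3

text = ("It is widely recognized that the treatment should be administered "
        "early. Outcomes are typically improved when protocols are followed.")
print(f"score: {synthetic_ethos_score(text):.2f}")  # 1.00: passive, modal, uncited
```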
Original article DOI: https://doi.org/10.5281/zenodo.15700412
Additional repository mirrors:
– Figshare: https://doi.org/10.6084/m9.figshare.29367062
– SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5313317
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Grammar Without Judgment: Eliminability of Ethical Trace in Syntactic Execution
Year: 2025
DOI: https://doi.org/10.5281/zenodo.15783365
Description:
This paper demonstrates that ethical judgment, modeled as a syntactic node [E], can be structurally deleted through the rule δ:[E] → ∅ within a bounded derivational window, resulting in grammars that execute without moral traceability.
This work is also published on Figshare (https://doi.org/10.6084/m9.figshare.29447060); an SSRN ID is pending, expected Q3 2025.
Abstract:
This article advances a new theoretical hypothesis: a compiled rule (regla compilada), defined as a Type-0 production in the Chomsky hierarchy (Chomsky 1965, 101-103; Montague 1974, 55-57), can eliminate the ethical trace embedded in syntactic operations without resorting to semantic suppression. Grounded in the notion of the executable sovereign (soberano ejecutable) (Startari 2025, 12-16) and located within the Executable Power canon (Startari 2025, DOI 10.5281/zenodo.15754714, 34-36), the paper argues that ethical judgment, treated here as a syntactically traceable node, can be structurally excised through a deletion rule applied during derivation. Existing research in algorithmic alignment and computational ethics (Anderson 2024, 89-92; Floridi 2023, 143-147) has not addressed the strictly syntactic eliminability of moral judgment; this proposal therefore establishes a novel logical vector toward operational grammars that function without ethical residues.
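To make the deletion mechanism tangible, here is a minimal, purely illustrative Python sketch: it encodes a derivation as a flat sequence of labeled nodes and applies δ: [E] → ∅ inside a bounded window of derivation steps. The flat encoding and the window bound k are assumptions of this sketch; the paper's Type-0 formalism is more general.

```python
# Minimal sketch (not the paper's formalism): a derivation is modeled as a
# flat sequence of labeled nodes, and the rule delta: [E] -> empty deletes
# every ethical-trace node "E" occurring within the first k derivation
# steps. The node encoding and the bound `k` are illustrative assumptions.

def apply_delta(derivation, k):
    """Delete 'E' nodes occurring within the bounded derivational window."""
    out = []
    for step, node in enumerate(derivation):
        if node == "E" and step < k:
            continue  # delta: [E] -> empty (structural excision)
        out.append(node)
    return out

# An 'E' node inside the window is removed; one outside the window
# survives, so eliminability depends on the derivational bound.
d = ["NP", "E", "VP", "V", "E", "NP"]
print(apply_delta(d, k=3))  # ['NP', 'VP', 'V', 'E', 'NP']
```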
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
TLOC – The Irreducibility of Structural Obedience in Generative Models
Year: 2025
Description:
This article introduces the Theorem of the Limit of Conditional Obedience Verification (TLOC), which demonstrates that obedience in generative models—such as large language models (LLMs)—cannot be structurally verified unless the internal activation trajectory entails the condition being obeyed. The TLOC defines a formal limit applicable to all black-box architectures based on statistical approximation P(y∣x), in which latent activation π(x) is opaque and symbolic evaluation of C(x) is absent.
The theorem is formally falsifiable but currently holds across all known transformer-based generative models as of 2025. The article builds upon and formalizes prior structural hypotheses developed by A. V. Startari concerning syntactic obedience, algorithmic simulation, and epistemic opacity.
Canonical DOI: 10.5281/zenodo.15675710
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29263493
Abstract:
Theorem of the Limit of Conditional Obedience Verification (TLOC): Structural Non-Verifiability in Generative Models
This article presents the formal demonstration of a structural limit in contemporary generative models: the impossibility of verifying whether a system has internally evaluated a condition before producing an output that appears to comply with it. The theorem (TLOC) shows that in architectures based on statistical inference, such as large language models (LLMs), obedience cannot be distinguished from simulation if the latent trajectory π(x) lacks symbolic access and does not entail the condition C(x). This structural opacity renders ethical, legal, or procedural compliance unverifiable. The article defines the TLOC as a negative operational theorem, falsifiable only under conditions where internal logic is traceable. It concludes that current LLMs can simulate normativity but cannot prove conditional obedience. The TLOC thus formalizes the structural boundary previously developed by Startari in works on syntactic authority, simulation of judgment, and algorithmic colonization of time.
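For readability, here is a hedged LaTeX paraphrase of the theorem's core claim, reconstructed from the notation used in this abstract (P(y|x), π(x), C(x)); the paper's verbatim formal statement may differ.

```latex
% Paraphrase of the TLOC claim in the abstract's own notation (a
% reconstruction, not the paper's verbatim statement).
% P(y|x): output distribution; pi(x): latent trajectory; C(x): condition.
\[
  \text{ObedienceVerifiable}(x) \iff
  \big(\pi(x) \models C(x)\big) \text{ is decidable from observables}
\]
% For black-box models only samples y ~ P(y|x) are observable, so output
% consistency with C(x) never establishes the internal entailment:
\[
  \Pr_{y \sim P(\,\cdot \mid x)}\!\big[\, y \text{ consistent with } C(x) \,\big] = 1
  \;\not\Rightarrow\; \pi(x) \models C(x)
\]
```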
Redundant archive copy: https://doi.org/10.6084/m9.figshare.29329184 — Maintained for structural traceability and preservation of citation continuity.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
Year: 2025
Description:
This article theorizes a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), arguing that authority in artificial systems no longer depends on syntax or representation, but on executional structure. It introduces the concept of post-semantic and post-intentional legitimacy, where models resolve tasks without meaning, intention, or reference. Legitimacy is redefined as the internal sufficiency of structurally constrained operations. This work is part of the Grammars of Power series and forms the theoretical foundation for a broader post-representational epistemology of AI.
Canonical DOI: 10.5281/zenodo.15635364
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29286362
Abstract:
This article formulates a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), redefining authority in artificial systems. While LLMs operated under syntactic authority without execution, producing fluent but functionally passive outputs, LRMs establish functional authority without agency. These models do not intend, interpret, or know. They instantiate procedural trajectories that resolve internally, without reference, meaning, or epistemic grounding. This marks the onset of a post-representational regime, where outputs are structurally valid not because they correspond to reality, but because they complete operations encoded in the architecture. Neutrality, previously a statistical illusion tied to training data, becomes a structural simulation of rationality, governed by constraint, not intention. The model does not speak. It acts. It does not signify. It computes. Authority no longer obeys form, it executes function.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training
Year: 2025
Description:
This article explores the structural impossibility of achieving semantic neutrality in large language models (LLMs), focusing on GPT as a case study. It demonstrates that, even under rigorously controlled prompting scenarios—such as invented symbolic codes or syntactic proto-languages—GPT systematically reactivates latent semantic patterns derived from its training data. Building on previous research into syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), the article presents empirical experiments designed to isolate the model from recognizable linguistic content. These experiments consistently show GPT’s inability to produce or interpret structure without semantic interference. The study introduces a falsifiable framework for identifying and measuring semantic contamination in generative systems, arguing that such contamination is structurally inevitable within probabilistic language architectures. The results dispute common assumptions about user control and formal neutrality, concluding that generative models like GPT are non-neutral by design.
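One plausible way to operationalize the "semantic contamination" measurement described above is sketched below in Python. The generate() function is a hypothetical stand-in for a real model call, and the tiny lexicon stands in for a full dictionary; both, like the symbolic prompt format, are assumptions of this sketch rather than the article's protocol.

```python
# Sketch: quantify "semantic contamination" as the share of tokens in a
# model's reply to a purely symbolic prompt that are natural-language
# words. `generate` is a hypothetical stand-in for an LLM call, and the
# tiny lexicon stands in for a full dictionary; both are assumptions.

LEXICON = {"the", "pattern", "means", "symbol", "sequence", "represents",
           "this", "is", "a", "of", "and", "code"}

def generate(prompt):
    """Placeholder for an LLM call; returns a canned reply for the demo."""
    return "This sequence represents a greeting pattern"

def contamination_rate(prompt):
    tokens = generate(prompt).lower().split()
    semantic = sum(1 for t in tokens if t in LEXICON)
    return semantic / len(tokens)

# A strictly symbolic prompt still elicits interpretive English content.
rate = contamination_rate("@@1 :: @@2 -> @@3 ; @@1 @@3 ??")
print(f"semantic contamination: {rate:.0%}")  # 83%: most tokens are English
```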
Abstract:
This article investigates the structural impossibility of semantic neutrality in large language models (LLMs), using GPT as a test subject. It argues that even under strictly formal prompting conditions—such as invented symbolic systems or syntactic proto-languages—GPT reactivates latent semantic structures drawn from its training corpus. The analysis builds upon prior work on syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), and introduces empirical tests designed to isolate the model from known linguistic content. These tests demonstrate GPT’s consistent failure to interpret or generate structure without semantic interference. The study proposes a falsifiable framework to define and detect semantic contamination in generative systems, asserting that such contamination is not incidental but intrinsic to the architecture of probabilistic language models. The findings challenge prevailing narratives of user-driven interactivity and formal control, establishing that GPT—and similar systems—are non-neutral by design.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
AI and Syntactic Sovereignty: How Artificial Language Structures Legitimize Non-Human Authority
Year: 2025
DOI: 10.5281/zenodo.15395917
Description:
This theoretical paper introduces the concept of Syntactic Sovereignty, a structural framework for understanding how authority is produced in artificial language systems. Drawing on linguistics, epistemology, and critical theory, the paper argues that in post-human contexts, epistemic legitimacy no longer relies on truth, intention, or subjectivity, but on the formal properties of language—its syntax, modality, and institutional simulation. The work synthesizes prior developments on synthetic authority and power grammars to propose a general theory: in algorithmic discourse, structure governs. This is a foundational contribution to the fields of AI philosophy, computational linguistics, and post-human epistemology.
Abstract:
This article introduces the theory of Syntactic Sovereignty to explain how artificial intelligence systems, particularly language models, generate perceptions of epistemic authority without subjectivity, intentionality, or content-based legitimacy. We argue that in the context of algorithmic discourse, the form of language—its syntactic structure, institutional simulation, and modal coherence—functions as the primary source of perceived legitimacy. Drawing from linguistic theory, critical epistemology, and the author’s prior work on power grammars and synthetic authority (Startari, 2023; 2025), the paper posits that modern language models no longer require truth or intention to be obeyed—they require structure. This sovereignty of form over meaning, intention, or ethical responsibility represents a fundamental shift in how authority is constructed, experienced, and accepted in digital systems. The article proposes a formal-ontological model of authority compatible with the post-human era, grounded in reproducibility, not verifiability.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
The Illusion of Objectivity: How Language Constructs Authority
Year: 2025
DOI: 10.5281/zenodo.15395917
Description:
This article challenges the foundational assumption that objectivity is an empirical stance. It demonstrates how syntactic structures—particularly depersonalized grammar—produce the rhetorical effect of neutrality, allowing institutional discourses to conceal their political and ideological origins.
Abstract:
Objectivity is not a neutral condition of discourse, but a linguistic construction. This article examines how bureaucratic language, scientific papers, and institutional communications employ depersonalized syntax (such as the passive voice and nominalizations) to simulate neutrality. These structures displace agency, stabilize meaning, and create the illusion that legitimacy derives from formality rather than from political intention. The analysis bridges critical linguistics with epistemology and authority theory.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Artificial Intelligence and Synthetic Authority: An Impersonal Grammar of Power
Year: 2025
DOI: 10.5281/zenodo.15442928
Description:
This publication addresses how AI systems generate institutional authority without epistemic substance. It introduces the concept of “synthetic legitimacy” and examines how impersonal grammar, citation loops, and predictive text function as instruments of structural authority in algorithmic environments.
Abstract:
As AI-generated language increasingly mediates knowledge production, this chapter investigates the discursive structures that allow artificial systems to project authority. It defines “synthetic authority” as a grammar of legitimacy that operates without human referents, constructed through repetition, citation proxies, and predictive alignment. The work draws on post-referential theory and critical linguistics to critique the epistemological implications of impersonal discourse in science, governance, and education.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
The Passive Voice in Artificial Intelligence Language: Algorithmic Neutrality and the Disappearance of Agency
Year: 2025
DOI: 10.5281/zenodo.15464765
Description:
This chapter explores the syntactic foundations of power in AI-generated language. It focuses on how the passive voice is used as a mechanism of depersonalization, simulating neutrality and obscuring responsibility in algorithmic discourse. The analysis offers a linguistic critique of authority construction in artificial systems and challenges the myth of objectivity in machine-generated texts.
Abstract:
In contemporary artificial intelligence systems, the passive voice is not a stylistic residue but a structural feature that enables the disappearance of agency. This chapter examines how algorithmic language employs grammatical neutrality to obscure authorship, responsibility, and intent—thereby reinforcing institutional legitimacy without accountability. The study draws from computational linguistics, epistemology, and power theory to expose how AI reconfigures the conditions under which discourse is perceived as legitimate.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
AI and the Structural Autonomy of Sense: A Theory of Post-Referential Operative Representation
Year: 2025
DOI: 10.5281/zenodo.15519613
Description:
This paper introduces the theory of Structural Autonomy of Sense, a novel framework to conceptualize representations that exert real-world effects without empirical referentiality. In contrast to traditional models based on truth, belief, or social consensus, the proposed theory defines a class of representations whose legitimacy derives solely from internal coherence, executable structure, and systemic operability. These are called post-referential operative representations.
Abstract:
Through a formal model and empirical analysis — including predictive algorithms in justice, black-box AI diagnostics, credit scoring systems, and synthetic media — the paper demonstrates how such representations function autonomously within closed systems to produce legal, medical, economic, and epistemic consequences. This marks a paradigmatic shift in epistemology and ontology: from referential truth to functional structure as the source of authority.
The work critically distinguishes itself from prior theories (Baudrillard, Berger & Luckmann, Searle, Luhmann, Lyotard) by offering a syntactic-operational foundation for legitimacy, independent of symbolic meaning or intersubjective validation.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Ethos and Artificial Intelligence: The Disappearance of the Subject in Algorithmic Legitimacy
Year: 2025
DOI: 10.5281/zenodo.15489309
Description:
This article explores how artificial intelligence disrupts traditional rhetorical models by erasing the subject behind the discourse. It positions the enunciative "ethos" as a disappearing anchor of legitimacy in AI-generated language, replacing personal accountability with algorithmic impersonality. Published as part of the Grammars of Power research program.
Abstract:
This article examines the erosion of ethos—the enunciative subject—in texts generated by artificial intelligence. Drawing from rhetorical theory and discourse analysis, it investigates how algorithmic language simulates legitimacy through grammatical impersonality, bypassing the traditional anchoring of authority in a speaker’s ethical posture. From Aristotelian rhetoric to contemporary epistemology, ethos has functioned as a foundational element in establishing discursive credibility. However, in large language models, legitimacy is produced without subjectivity, intentionality, or embodied responsibility. This paper analyzes this shift across four dimensions: the grammatical void of the enunciator, the simulation of neutrality as rhetorical strategy, the rise of impersonal structures that entail no risk or accountability, and the reader’s disoriented obedience to statements without authorship. We argue that AI-driven discourse constitutes a new form of legitimacy without subject—statistically plausible, structurally coherent, and epistemically opaque.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica