Explore Research
 
Peer-reviewed articles, theoretical proposals, and interdisciplinary papers.
Articles & Papers
This section presents the core academic work of Agustín V. Startari, focused on the structures of language, legitimacy, and authority. It includes published articles, working papers, and formal research projects across linguistics, history, epistemology, and artificial intelligence.
Entropy of Authority in Dialogue Games
Year: 2025
Description:
Authority Entropy quantifies how linguistic authority concentrates or disperses within dialogue windows and links these dynamics to compliance, convergence, payoff stability, and regret. The study trains a strictly causal stance classifier with left context only, computes entropy, slope, and volatility, and tests predictive value against strong baselines. Datasets span synthetic arenas, open multi-party tasks, and consented human-model dialogues, with stress tests that edit authority cues while preserving semantics. The release specifies leakage audits, calibration, fairness checks, and a replication bundle.
Abstract:
We introduce Authority Entropy, an index that quantifies the distribution of authority stances within dialogue windows and tests its predictive value for compliance, convergence speed, and equilibrium stability. Using a multilingual lexicon of authority-bearing constructions anchored in the regla compilada as an operational constraint set, we train a strictly causal classifier that maps text to stance probabilities over {low, neutral, high}. Authority Entropy is computed per sliding window, together with its slope and volatility, and related to behavioral endpoints through survival models and doubly robust estimators. The study spans synthetic arenas with controllable payoffs, open multi-party tasks with outcome labels, and consented human–model interactions. Baselines include sentiment, toxicity, politeness, formality, and power taggers. Stress tests apply adversarial edits that alter authority cues while preserving semantics to assess sensitivity of entropy and downstream effects. Primary outcomes are compliance rate, convergence time, payoff stability, and regret, reported with leakage audits, calibration checks, and confidence intervals. Results target a public specification of the index, a causal benchmark and leaderboard, and open tooling to visualize instability regimes over time. The contribution is a portable, language-aware measure that links local authority structure to cooperative dynamics without right context leakage.
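As an illustration of the measures named in the abstract, the sketch below computes window-level Authority Entropy over stance labels in {low, neutral, high}, together with its slope and volatility. The window size, step, and function names are illustrative assumptions; the paper's replication bundle, not this sketch, defines the official computation.

```python
import math

STANCES = ("low", "neutral", "high")

def authority_entropy(stances):
    """Shannon entropy (base 2) of the stance distribution in one window."""
    n = len(stances)
    probs = [stances.count(s) / n for s in STANCES]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def entropy_series(stances, window=8, step=1):
    """Authority Entropy over sliding windows of a dialogue."""
    return [authority_entropy(stances[i:i + window])
            for i in range(0, len(stances) - window + 1, step)]

def slope_and_volatility(series):
    """Least-squares slope and standard deviation of the entropy series."""
    n = len(series)
    xs = range(n)
    mx, my = (n - 1) / 2, sum(series) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, series))
             / sum((x - mx) ** 2 for x in xs))
    vol = math.sqrt(sum((y - my) ** 2 for y in series) / n)
    return slope, vol
```

Uniform stance mixes maximize entropy (dispersed authority); a window dominated by one stance yields zero entropy (concentrated authority), which is the instability signal the paper relates to compliance and regret.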
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17342502
- Secondary archive: https://doi.org/10.6084/m9.figshare.30347539
- SSRN: Pending assignment (ETA: Q4 2025)
Full Article Here: Entropy of Authority in Dialogue Games
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Citation by Completion: LLM Writing Aids and the Redistribution of Academic Credit
Year: 2025
Description:
Citation by Completion: LLM Writing Aids and the Redistribution of Academic Credit investigates how autocomplete systems in writing tools influence the visibility of scholars. Through controlled experiments, it demonstrates that predictive phrasing with legitimizing language increases citation concentration and reduces novelty. The paper introduces the Fair Citation Prompt, a design model that exposes how citation suggestions are ranked and ensures inclusion of underrepresented sources. By making linguistic probability transparent, it transforms writing assistance into a fairer system of authorship and recognition.
Abstract:
Large language models increasingly shape how academic citations are produced, suggested, and normalized. This paper examines the redistribution of academic credit produced by autocomplete and citation recommendation systems. While citation metrics traditionally reflect author intent, the syntactic design of LLM suggestion interfaces introduces a new variable: authority-bearing syntax. Through a double-blind experimental design comparing writing sessions with suggestions disabled, neutral suggestions, and authority-framed suggestions, this study quantifies shifts in citation concentration, novelty, and legitimacy phrasing. Results show that completions containing legitimizing structures (“as established by,” “following the seminal work of”) significantly increase concentration and reduce source diversity. The paper defines three measurable deltas, ΔC (concentration), ΔN (novelty), and ΔA (authority syntax), and demonstrates how predictive phrasing can algorithmically reproduce canonical hierarchies. As a corrective, it proposes a Fair Citation Prompt specification and an editorial checklist to detect and mitigate credit capture through syntactic bias. The findings suggest that citation fairness must be treated not only as a bibliometric concern but as a structural property of text generation systems, requiring explicit governance at the level of language form.
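The concentration delta ΔC can be made concrete with a standard concentration measure. The sketch below uses the Herfindahl-Hirschman index over cited sources; this is one plausible operationalization, not necessarily the one used in the paper, and the function names are invented for illustration.

```python
from collections import Counter

def concentration(citations):
    """Herfindahl-Hirschman index over cited sources (1.0 = one source only).
    A plausible stand-in for the paper's concentration measure."""
    counts = Counter(citations)
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

def delta_c(baseline_citations, assisted_citations):
    """Positive delta-C: suggestions concentrated credit on fewer sources."""
    return concentration(assisted_citations) - concentration(baseline_citations)
```

Four distinct sources score 0.25; the same four citations collapsed onto one canonical author score 0.625, so ΔC = 0.375 registers the credit capture the abstract describes.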
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17287506
- Secondary archive: https://doi.org/10.6084/m9.figshare.30295582
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Citation by Completion: LLM Writing Aids and the Redistribution of Academic Credit
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Protocol as Prescription: Governance Gaps in Automated Medical Policy Drafting
Year: 2025
Description:
An operational blueprint for health agencies that use large language models to draft policy. After the first equivalence, “protocol” is treated as regla compilada. The article defines a clause-level provenance standard that binds each issued sentence to its inputs, prompts, parameters, retrieval sources, reviewers, timestamps, and hashes. It specifies version-controlled checkpoints, a grammar-aware audit of deontic force, scope, agent visibility, and nominalizations, and a responsibility matrix that reattaches liability to human roles. A simulated ministry case shows end-to-end traceability and produces an exportable evidence bundle suitable for inspection and litigation. The blueprint aligns with WHO guidance, the EU AI Act, TRIPOD-LLM reporting practice adapted to administration, and current accountability work on provenance and authentication. The result: automated drafting becomes auditable, defensible, and ready for adoption by ministries, payers, and hospital networks.
Abstract:
This article examines how health policy texts drafted with large language models can detach legal responsibility from the formal circuit of governance. Treating “protocol” as regla compilada, anchored to a Type 0 production in the Chomsky hierarchy, it specifies a provenance standard that binds each clause of an issued policy to its generating inputs, including prompts, parameters, retrieval sources, reviewers, timestamps, and cryptographic hashes. The method combines version-controlled diffs across scoping, drafting, legal review, and publication with a formal alignment of authority-bearing constructions, focusing on deontic stacks, default scopes, agent deletion, and nominalizations. A simulated ministry case demonstrates end-to-end traceability, producing an exportable evidence bundle that links surviving clauses to their inputs and human approvals. Findings show where machine-introduced formulations change duty of care or obscure decision rights, and define mandatory human sign-offs when high-risk constructions appear. The article delivers three operational artifacts for health agencies: a provenance specification, a responsibility matrix across drafting stages, and an audit checklist calibrated to inspection and courtroom needs. By reattaching authorship and justification to the formal record, the blueprint closes a governance gap in automated policy drafting and states the conditions under which AI-assisted procedures remain defensible.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17259810
- Secondary archive: https://doi.org/10.6084/m9.figshare.30275833
Full Article Here: Protocol as Prescription: Governance Gaps in Automated Medical Policy Drafting
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Indexical Collapse: Reference Disappears, Authority Remains in Predictive Systems
Year: 2025
Description:
Indexical Collapse: Reference Disappears, Authority Remains in Predictive Systems defines how AI models generate pronouns, demonstratives, and tenses that look anchored but point to nothing real. Agustín V. Startari shows that this absence of reference does not weaken authority; it often strengthens it in law, medicine, and governance. The article proposes pragmatic auditing to detect and regulate unanchored indexicals, setting thresholds for acceptable use in critical domains.
Abstract:
This article introduces the concept of Indexical Collapse, the disappearance of reference in predictive systems. Indexicals such as pronouns, demonstratives, and tenses presuppose a contextual anchor, yet predictive language models reproduce them without connection to reality. The outcome is a collapse of reference that paradoxically produces authority effects in law, medicine, and governance. By analyzing judicial transcripts, medical reports, institutional records, and chatbot interactions generated by AI, the paper proposes a framework for pragmatic auditing of predictive outputs. It establishes thresholds for acceptable referential absence in critical domains, positioning Indexical Collapse as a central category for evaluating the legitimacy of predictive discourse.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17226412
- Secondary archive: https://doi.org/10.6084/m9.figshare.30233950
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Indexical Collapse: Reference Disappears, Authority Remains in Predictive Systems
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
My AI, My Regime: Authoritarian Personalism in User–AI Governance by Form
Year: 2025
Description:
This article develops the concept of authoritarian personalism in user–AI governance by form, arguing that each user can legislate a regla compilada—a portable, retractable rule set that the AI, as soberano ejecutable, must enforce at the level of syntax rather than intent. The framework distinguishes descriptive mirroring from prescriptive obedience, defines indicators of executable legitimacy, and maps risks such as path dependence, overreach, and collisions with platform or legal policy.
Abstract:
This article introduces the concept of authoritarian personalism in user–AI governance by form. It argues that each user can establish a regime of authority over an AI through a self-authored set of rules that operate as a regla compilada, a Type-0 production in the Chomsky hierarchy. In contrast to aggregate alignment frameworks or provider constitutions, this regime functions at the level of linguistic form. The user acts as legislator, while the AI functions as a soberano ejecutable that enforces the compiled rule within platform constraints. The analysis distinguishes mirroring (descriptive reflection) from regime (prescriptive obedience) and identifies surface features that make obedience legible, including directive grammar, defaults, refusal and apology grammar, enumeration bias, evidentials, and style prohibitions. It predicts that user corrections generate path dependence, that rules generalize across tasks, and that retractability is observable when explicit rule citations occur. The risks include rule overreach, collisions with higher-order policies, and unintended spillover across domains. By centering the individual as a primary locus of governance, this framework reorients debates on AI alignment away from provider norms toward personal regimes, verified through linguistic form rather than intent.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17208657
- Secondary archive: https://doi.org/10.6084/m9.figshare.30218590
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: My AI, My Regime: Authoritarian Personalism in User–AI Governance by Form
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Clinical Syntax: Diagnoses Without Subjects in AI-Powered Medical Notes
Year: 2025
Description:
This article analyzes the structural erasure of the patient as a grammatical subject in AI-generated clinical documentation. Drawing on a corpus of human-authored and automated medical notes, it identifies three recurrent strategies of subject removal—impersonal passives, nominalizations, and fragment clauses—and introduces the Syntactic Opacity Index (SOI) to quantify opacity. The study situates this phenomenon within medical linguistics, structural analysis of AI language, and ethical theory, demonstrating how automation reorganizes accountability in institutional medicine.
Abstract:
This article examines the structural erasure of the patient as an active subject in clinical records generated by artificial intelligence systems. Automated outputs from Epic Scribe, GPT-4, and institutional medical note generators increasingly rely on impersonal constructions, nominalizations, and fragmented clauses that displace the patient from the syntactic center of medical discourse. The shift toward objectified formulations such as “bilateral opacities noted” rather than “the patient presents with” produces a discourse where agency and responsibility are structurally absent. Building on prior analyses of passive voice and subject deletion, the study introduces the Syntactic Opacity Index (SOI) as a formal measure to quantify the density of non-agentive structures in AI-authored notes. The corpus analysis demonstrates how opacity accumulates at the sentence level, rendering the clinical narrative less transparent and more difficult to attribute. Beyond linguistic critique, the article assesses the ethical and epistemic consequences of syntactic opacity in medicine, particularly regarding accountability, patient-centered care, and institutional responsibility. The findings suggest that AI-powered medical documentation does not merely accelerate administrative workflows but also reconfigures the grammar of care itself, demanding urgent attention to how language structures shape both diagnosis and responsibility.
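A minimal sketch of how a density measure like the Syntactic Opacity Index might be computed, using surface patterns for passives, agentless clauses, and nominalizations. The marker patterns and function name are illustrative stand-ins; the paper's SOI is corpus-calibrated and may rely on full parsing rather than regular expressions.

```python
import re

# Illustrative markers only; the article's actual SOI operationalization
# may use a syntactic parser rather than surface patterns.
PASSIVE = re.compile(r"\b(?:is|are|was|were|be|been)\s+\w+(?:ed|en)\b", re.I)
AGENTLESS = re.compile(r"\b\w+(?:ed|en)\s*[.;]|\bnoted\b|\bobserved\b", re.I)
NOMINAL = re.compile(r"\b\w+(?:tion|ment|ance|ence)\b", re.I)

def syntactic_opacity(sentences):
    """Share of sentences containing at least one non-agentive structure."""
    opaque = sum(
        1 for s in sentences
        if PASSIVE.search(s) or AGENTLESS.search(s) or NOMINAL.search(s)
    )
    return opaque / len(sentences) if sentences else 0.0
```

On this heuristic, “Bilateral opacities noted.” scores as opaque while “The patient presents with a cough.” does not, which is exactly the contrast the abstract draws.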
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17184301
- Secondary archive: https://doi.org/10.6084/m9.figshare.30187882
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Clinical Syntax: Diagnoses Without Subjects in AI-Powered Medical Notes
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Borrowed Voices, Shared Debt: Plagiarism, Idea Recombination, and the Knowledge Commons in Large Language Models
Year: 2025
Description:
Large language models (LLMs) promise effortless content creation: essays in seconds, books on demand, reports that appear at the click of a button. But this miracle of productivity hides a disturbing fact: these models operate by recombining the work of others. The sentences they generate are stitched from patterns extracted from books, newspapers, online forums, research papers, and code repositories. The resulting text is fluent, but it is not original in the scholarly sense. It is a form of plagiarism at scale, where attribution is absent by design.
Abstract:
Large language models generate fluent text by recombining the language and ideas of prior authors at scale. This process produces plagiarism-like harms in three dimensions: direct wording leakage, imitation of distinctive styles, and appropriation of argument structures or conceptual syntheses without provenance. At the same time, their capacity to provide insight or novel-seeming combinations depends entirely on the accumulated labor of millions of human writers, editors, teachers, and curators who built the knowledge commons. This paper argues that denunciation and recognition must proceed together: the harms of extraction must be exposed, yet the debt to the commons must also be acknowledged. The article proposes a framework that defines the scope of plagiarism in this context, diagnoses the mechanisms of recombination, and sets out operational remedies, including dataset governance, attribution layers, compensation pools, and measurable audit thresholds. The goal is to establish a system that restricts illegitimate appropriation while reinvesting in the infrastructures of shared knowledge that make such synthesis possible.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17132004
- Secondary archive: https://doi.org/10.6084/m9.figshare.30137422
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Borrowed Voices, Shared Debt
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Designing Accountability: Ethical Frameworks for Reintroducing Responsibility in Executable Governance
Year: 2025
Description:
Executable governance displaces responsibility by producing authority without subjects. This article proposes accountability injection, a structural model with three tiers of responsibility (human, hybrid, syntactic supervised). Applied to the AI Act, smart contracts, admissions, and medical audits, the framework shows how appeal and accountability can be reintroduced directly into the regla compilada, ensuring legitimacy in predictive societies.
Abstract:
This article develops an ethical-legal framework for reintroducing responsibility into executable governance. Predictive systems, by generating authority without agents, displace accountability and leave institutions without appeal mechanisms. Building on the concepts of spectral sovereignty, null subjects, and the codex of authority, the paper introduces the notion of accountability injection as a design principle. It formulates a three-tier model: (1) human, where non-delegable critical decisions are tied to named subjects; (2) hybrid, where human judgment coexists with model output under calibrated thresholds; and (3) syntactic supervised, where delegation is permitted only with immutable ledgers, traceability, and automatic escalation triggers. Through applied case studies in EU AI Act conformity assessment, DAO governance, predictive credit scoring, and automated medical audits, the framework demonstrates how appeal and responsibility can be restored without undermining institutional efficiency. The conclusion argues that accountability must be compiled directly into the regla compilada of governance systems, creating a normative blueprint for legislators, courts, and regulators to maintain responsibility in predictive societies.
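The three-tier model can be read as a routing rule over decision risk and model confidence. The toy router below uses invented threshold values and labels purely to make the escalation logic concrete; none of the numbers or names come from the article.

```python
def route_decision(risk, model_conf, human_threshold=0.9, hybrid_threshold=0.6):
    """Toy router for the three-tier accountability model.
    Thresholds are illustrative assumptions, not values from the article."""
    if risk == "critical":
        return "human"             # tier 1: non-delegable, named subject
    if model_conf < hybrid_threshold:
        return "human"             # escalation trigger: low confidence
    if model_conf < human_threshold:
        return "hybrid"            # tier 2: human judgment plus model output
    return "syntactic-supervised"  # tier 3: delegated, logged, traceable
```

The point of the sketch is the ordering: criticality overrides confidence, and delegation to the supervised tier is the residual case, never the default.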
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17106808
- Secondary archive: https://doi.org/10.6084/m9.figshare.30112711
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Designing Accountability: Ethical Frameworks for Reintroducing Responsibility in Executable Governance
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Null Subjects of Power: The Politics of Absence in Executable Language
Year: 2025
Description:
This paper develops the concept of Null Subjects of Power, where institutional authority operates without an explicit agent. Drawing from generative grammar and the categories of Regla compilada and soberano ejecutable, the study demonstrates how executable language generates binding decisions in law, finance, and policy without attribution. Through case studies, the article formalizes the null subject as a political category, highlighting its implications for obedience, legitimacy, and sovereignty in predictive societies.
Abstract:
This article introduces the concept of Null Subjects of Power, where authority operates through the absence of an explicit agent. While in linguistics the null subject is a grammatical category, in predictive societies it becomes a political one: institutions obey rules without a speaker, mandates without an issuer, and decisions without a subject. From judicial sentences and financial reports to policy drafts generated by AI, the null subject marks the disappearance of responsibility while preserving obedience. The paper argues that null subjects constitute a structural category of power, redefining sovereignty in executable language.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17085900
- Secondary archive: https://doi.org/10.6084/m9.figshare.30085636
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Null Subjects of Power: The Politics of Absence in Executable Language
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Spectral Sovereignty: Authority Without Presence in Predictive Systems
Year: 2025
Description:
Spectral Sovereignty develops the concept of authority that operates through predictive systems without an identifiable sovereign. Examining automated financial compliance, predictive scoring in health and credit systems, and decentralized DAO governance, the paper shows how institutions enact authority in absence, producing obedience without command and legitimacy without presence. It proposes a formal analytic framework for this spectral mode of governance and traces its consequences for accountability, traceability, and institutional responsibility.
Abstract:
This article introduces the concept of Spectral Sovereignty, a form of authority that operates through predictive systems without the presence of a subject. Unlike classical sovereignty, where command is anchored in an identifiable sovereign, spectral sovereignty emerges when structures compel compliance while concealing their source. Through examples including automated financial compliance, predictive scoring in health and credit systems, and decentralized DAO governance, the paper demonstrates how institutions increasingly enact authority in absence, generating obedience without command and legitimacy without presence. It develops a formal analytic framework to describe this spectral mode of governance and examines its consequences for accountability, traceability, and institutional responsibility within predictive societies.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17063518
- Secondary archive: https://doi.org/10.6084/m9.figshare.30061939
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Spectral Sovereignty: Authority Without Presence in Predictive Systems
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
The Codex of Authority
Year: 2025
Description:
The Codex of Authority develops the concept of the codex sintáctico, a compiled legal corpus where legitimacy arises from syntactic operability rather than legislative origin. By examining AI Act drafts, blockchain DAOs, and Basel III frameworks, the paper argues that authority is now executed by compilers (the soberano ejecutable) instead of interpreters or legislators. The study outlines the risks of normativity without origin and the emergence of executable law as a new legal paradigm.
Abstract:
This article introduces the concept of the Codex of Authority, a juridical metaphor for the compiled rule that governs without reference to a legislator. In predictive societies, authority is no longer produced by political will but by syntactic form. From automated drafts of the EU’s AI Act to blockchain smart contracts, institutional norms emerge as self-sufficient codices where legitimacy resides in structure rather than origin. By analyzing this shift, the article proposes a framework for understanding how legal authority becomes executable, impersonal, and detached from interpretation.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.17026629
- Secondary archive: https://doi.org/10.6084/m9.figshare.30025570
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: The Codex of Authority
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Obedience Without Command: The Silent Authority of Predictive Systems
Year: 2025
Description:
This article examines how predictive systems generate obedience in the absence of command. Building on sociological, linguistic, and philosophical frameworks, it argues that authority has migrated from visible command structures to the silent operations of syntax. Through case studies of ERP reporting, DAOs, and predictive scoring, the analysis demonstrates how the regla compilada and the soberano ejecutable compel compliance without issuing instructions. The study highlights the systemic risks of silent authority, including the loss of accountability, the impossibility of disobedience, and institutional blindness. It concludes by proposing the development of an “index of silent obedience” to measure how institutions reproduce compliance without voice, command, or subject.
Abstract:
This article investigates the paradox of obedience without command in predictive societies. Authority, once tied to explicit orders and visible command structures, is now embedded in syntactic operations that organize compliance without issuing instructions. Obedience Without Command explores how predictive systems generate silent authority, where rules are followed not because they are commanded, but because their form leaves no alternative. Through case studies of financial reporting, automated governance, and predictive scoring, the paper develops a framework to understand authority that operates without decision-makers, and obedience that emerges without command.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.16993991
- Secondary archive: https://doi.org/10.6084/m9.figshare.30010129
Full Article Here: Obedience Without Command: The Silent Authority of Predictive Systems
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Delegatio Ex Machina: Institutions Without Agency
Year: 2025
Description:
Delegatio Ex Machina examines how institutions delegate authority not to human agents but to structures that execute through the regla compilada. By analyzing automated central bank reports, smart contracts in DAOs, and policy drafts generated by language models, the article shows how syntactic delegation replaces political acts with repetitive forms of execution. Authority becomes embedded in grammar itself, producing what is termed the soberano ejecutable. Drawing on Wiener, Deleuze–Guattari, Galloway, Bratton, and Startari’s own works (Algorithmic Obedience, Ethos Without Source, The Grammar of Objectivity), the paper argues that predictive societies generate institutions without agency, structurally powerful yet pragmatically fragile, and calls for the development of an index to measure and audit syntactic delegation.
Abstract:
This article examines the disappearance of agency in institutional governance when predictive systems become the locus of delegation. Delegatio Ex Machina proposes that institutional authority is no longer anchored in decision-makers but in compiled rules that execute without reference to a subject. Central banks, international agencies, and automated audit systems illustrate how syntactic delegation replaces political acts with repetitive formal structures. By tracing this displacement, the paper defines a framework for understanding authority without agency and its risks for accountability in predictive societies.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.16949155
- Secondary archive: https://doi.org/10.6084/m9.figshare.29987578
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Delegatio Ex Machina: Institutions Without Agency
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Ethos Ex Machina: Identity Without Expression in Compiled Syntax
Year: 2025
Description:
This article introduces Ethos Ex Machina, the hidden mechanism that explains why AI texts sound authoritative even when they say almost nothing. Authority no longer depends on facts, authorship, or verification. Instead, credibility is compiled through syntax: coordinated clauses, measured negations, passives, hedged modality, and reference scaffolds.
The piece shows why this matters for scholars, jurists, developers, and the general public. It demonstrates how structural form can replace truth as the basis of legitimacy, and why institutions risk accepting polished but empty texts as valid. Readers will see how AI fabricates credibility through form alone and why vigilance over structure is now essential.
Abstract:
This article demonstrates that authority effects in large language model outputs can be generated independently of thematic content or authorial identity. Building on Ethos Without Source and The Grammar of Objectivity, it introduces the concept of non-expressive ethos, a credibility effect produced solely by syntactic configurations compiled through a regla compilada equivalent to a Type-0 generative system.
The study identifies a minimal set of structural markers (symmetric coordination, measured negation, legitimate passives, calibrated modality, nominalizations, balance operators, and reference scaffolds) that simulate trustworthiness and impartiality even in content-neutral texts. Through corpus ablation and comparative analysis, it shows that readers systematically attribute expertise and neutrality to texts that satisfy these structural conditions, regardless of topical information.
By formalizing this mechanism, the article reframes ethos as a syntactic phenomenon detached from content, intention, and external validation. It explains how LLM-produced drafts acquire legitimacy without verification and why institutions increasingly accept authority signals generated by structure alone. The findings extend the theory of syntactic power and consolidate the role of the regla compilada as the operative generator of credibility in post-referential discourse.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.16927104
- Secondary archive: https://doi.org/10.6084/m9.figshare.29967316
Full Article Here: Ethos Ex Machina: Identity Without Expression in Compiled Syntax
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Silent Mandates: The Rise of Implicit Directives in AI-Generated Bureaucratic Language
Year: 2025
Description:
This article analyzes how AI-generated bureaucratic documents conceal commands in subordinate structures such as conditionals, causal gerunds, and consecutive clauses. Through case studies in hospitals, universities, and HR departments, it introduces the concept of structural obedience and proposes the Implicit Directive Index as a tool to measure hidden mandates.
Abstract:
This article examines how large language models generate bureaucratic documents that conceal mandates within seemingly neutral structures. Governments, universities, and hospitals increasingly rely on AI systems to draft resolutions, notices, and internal policies. Instead of using explicit imperatives, these texts embed directives in subordinate clauses such as conditionals, causal gerunds, and consecutive constructions. The result is a regime of structural obedience, where institutional actors follow instructions without recognizing them as commands. Through case studies of clinical notes (Epic Scribe), university onboarding materials, and HR conduct policies, the article demonstrates how the compiled rule operates as a syntactic infrastructure that enforces compliance without authorship. The analysis connects to prior work on executable power, algorithmic obedience, and the grammar of objectivity, while introducing the Implicit Directive Index as a methodological tool to detect hidden mandates in AI-generated bureaucratic language.
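A rough sketch of how an Implicit Directive Index could be counted over sentences, using cue patterns for directives embedded in conditionals, causal gerunds, and consecutive constructions. The cue list, keywords, and scoring are hypothetical; the article's index is defined over annotated institutional corpora, not these patterns.

```python
import re

# Heuristic cue patterns for directives hidden in subordination;
# illustrative stand-ins for the article's Implicit Directive Index.
CUES = [
    # conditional: "If X, ... must/shall ..."
    re.compile(r"\bif\b.*\b(?:must|shall|will be required)\b", re.I),
    # causal gerund: "Pending/ensuring ... compliance ..."
    re.compile(r"\b(?:being|having|ensuring|pending)\b.*\bcompliance\b", re.I),
    # consecutive: "so that ... must/shall ..."
    re.compile(r"\b(?:so that|such that|therefore)\b.*\b(?:must|shall)\b", re.I),
]

def implicit_directive_index(sentences):
    """Implicit-directive cue matches divided by sentence count."""
    hits = sum(1 for s in sentences for cue in CUES if cue.search(s))
    return hits / len(sentences) if sentences else 0.0
```

A notice reading “If the form is incomplete, the request must be resubmitted.” carries a mandate without an imperative, and is exactly the kind of sentence such an index is meant to surface.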
DOI
- Primary archive: https://doi.org/10.5281/zenodo.16912168
- Secondary archive: https://doi.org/10.6084/m9.figshare.29950427
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Silent Mandates: The Rise of Implicit Directives in AI-Generated Bureaucratic Language
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Regulatory Legitimacy Without Referents: On the Syntax of AI-Generated Legal Drafts
Year: 2025
Description:
This article examines how legal drafts generated by artificial intelligence simulate regulatory legitimacy without referencing a sovereign or institutionally attributed authority. Based on a provenance-verified corpus of contracts, terms of service, and automated clauses, the analysis identifies syntactic patterns such as passive voice, normative conditionals, and chains of subordinate clauses that enable binding legal effects without a speaker. The central concept of legalidad sin fuente is presented as a structural condition rather than an omission. It is formalized through the notion of regla compilada, where legal authority in automated environments derives from grammatical configuration rather than institutional will. This displacement marks a new threshold for understanding norm production under language model governance.
Abstract:
This article analyzes how AI-generated legal texts simulate legitimacy without referencing a sovereign authority. Based on a provenance-verified corpus of machine-generated documents, including contracts, terms of service, and automated policy clauses, the study shows that the legislator is structurally displaced by recurring patterns of passive voice, normative conditionals, and chains of subordinate clauses. The result is legalidad sin fuente (sourceless legality), where the appearance of regulatory authority is produced by syntactic form rather than institutional attribution. Comparing these drafts with traditional legislative writing, the article outlines a typology that instantiates autoridad no referencial and identifies a dual risk: loss of authority traceability and an accountability gap in the binding effects of these texts. This syntactic delegation constitutes a paradigm of regla compilada, situated within the tradition of formal grammars, in which language enacts governance without a governing subject.
DOI
- Primary archive: https://doi.org/10.5281/zenodo.16746581
- Secondary archive: https://doi.org/10.6084/m9.figshare.29829101
- SSRN: Pending assignment (ETA: Q3 2025)
Full Article Here: Regulatory Legitimacy Without Referents
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Syntax Without Subject: Structural Delegation and the Disappearance of Political Agency in LLM-Governed Contexts
Year: 2025
Description:
This article introduces the concept of structural delegation to describe how large language models (LLMs) generate institutional directives that syntactically eliminate the subject. Drawing on a corpus of 172 LLM-generated legal, healthcare, and administrative documents, the study identifies three recurring grammatical structures (passives, nominalizations, and instruction templates with elided agents) that displace political and legal agency from the sentence. The analysis defines a traceability threshold beyond which directives remain executable but lack any referential anchor. The study proposes a formal framework based on reglas compiladas (type‑0 production) and evaluates the epistemic, legal, and procedural consequences of syntax-based authority.
Abstract:
This article examines the syntactic disappearance of the subject in LLM-governed documents. Structural delegation refers to the transfer of agency to impersonal grammatical forms that preclude subject reappearance. Subjects are not censored but syntactically eliminated through passive constructions, nominalizations, and imperative prompt formats with suppressed agents. Building on prior work on synthetic ethos and impersonal command grammars, the article shows that AI-generated institutional texts display consistent patterns of subject erasure. The study analyzes 172 documents produced by GPT‑4 class models (temperature 0.2–0.7, 2024–2025) across legal, healthcare, and administrative domains. Metrics include passive ratio (via dependency label parsing), nominalization density (via POS and suffix filters), and instruction-format frequency. The result is a form of executable authority grounded not in referential authorship but in compliance with a regla compilada (type-0 production). The study proposes a typology of structural delegation and a formal framework for detecting syntactic absence in automated governance.
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29665697. An SSRN ID is pending assignment (ETA: Q3 2025).
Full Article Here: Syntax Without Subject
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Sovereign Syntax in Financial Disclosure: How LLMs Shape Trust in Tokenized Economies
Year: 2025
Description:
This article introduces the Syntactic Deception Risk Index (SDRI), a structural metric designed to detect non-referential persuasion in AI-generated financial disclosures. Through the analysis of crypto whitepapers produced or refined by large language models (LLMs), the study formalizes the concept of sovereign syntax—a compiled rule (type 0 production) that governs perceived trust without source attribution. The SDRI quantifies syntactic features such as passive voice, nominalization, and modal density to expose linguistic patterns correlated with high-risk or deceptive projects. The article proposes SDRI-based integration into exchange listings, DAO audits, and regulatory pipelines, advancing syntactic evaluation as a new frontier of algorithmic governance. Grounded in Algorithmic Obedience and The Grammar of Objectivity, this work establishes a formal infrastructure for trust assessment when content verification fails.
Canonical DOI: 10.5281/zenodo.16421548
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29646473
Abstract:
Through structural analysis of LLM‑generated or LLM‑refined whitepapers, this study identifies a recurring pattern in tokenized finance: legitimacy is simulated through formal syntactic depth rather than verifiable disclosure. It introduces the Syntactic Deception Risk Index (SDRI), a quantitative measure of non‑referential persuasion derived from syntactic volatility. Grounded in Algorithmic Obedience and The Grammar of Objectivity, the findings show that high‑risk disclosures converge on a formal grammar that substitutes substantive content with surface coherence. The concept of sovereign syntax is formalized as the regla compilada (type‑0 production) that governs trust independently of source or reference. From this model follow concrete pathways for audit automation, exchange‑side filtration, and real‑time regulatory screening. SDRI thus exposes how non‑human authority embeds in financial language without a traceable epistemic anchor.
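As a reading aid, an SDRI-style score can be sketched as a weighted combination of the syntactic feature rates the abstract names. The weights, the normalization to [0, 1], and the review cutoff below are hypothetical illustrations, not the index specification from the article.

```python
# Hypothetical sketch of an SDRI-style score: a weighted sum of normalized
# rates for passive voice, nominalization, and modal density. Weights and
# the cutoff are illustrative assumptions, not the paper's definition.

SDRI_WEIGHTS = {"passive": 0.4, "nominalization": 0.35, "modal": 0.25}

def sdri(rates):
    """Combine feature rates (assumed pre-normalized to [0, 1])."""
    return sum(SDRI_WEIGHTS[f] * rates.get(f, 0.0) for f in SDRI_WEIGHTS)

disclosure = {"passive": 0.8, "nominalization": 0.6, "modal": 0.9}
score = sdri(disclosure)
print(round(score, 3))  # 0.755
print(score > 0.7)      # True: flag for review under the assumed cutoff
```

The design choice here is that the index depends only on structural rates, never on content claims, mirroring the article's premise that trust signals can be audited when content verification fails.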
SSRN: Pending assignment.
Full Article Here: Sovereign Syntax in Financial Disclosure
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Expense Coding Syntax: Misclassification in AI-Powered Corporate ERPs
Year: 2025
Description:
AI models embedded in corporate ERP systems misclassify expenses because of syntactic form rather than financial meaning. This paper proposes a fix: fair-syntax rewriting into a minimal Subject-Verb-Object structure.
Canonical DOI: 10.5281/zenodo.16322759
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29618654
Abstract:
This study examines how syntactic constructions in expense narratives affect misclassification rates in AI-powered corporate ERP systems. We trained transformer-based classifiers on labeled accounting data to predict expense categories and observed that these models frequently relied on grammatical form rather than financial semantics. We extracted syntactic features including nominalization frequency, defined as the ratio of deverbal nouns to verbs; coordination depth, measured by the maximum depth of coordinated clauses; and subordination complexity, expressed as the number of embedded subordinate clauses per sentence. Using SHAP (SHapley Additive exPlanations), we identified that these structural patterns significantly contribute to false allocations, thus increasing the likelihood of audit discrepancies. For interpretability, we applied the method introduced by Lundberg and Lee in their seminal work, “A Unified Approach to Interpreting Model Predictions,” published in Advances in Neural Information Processing Systems 30 (2017): 4765–4774.
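The three feature definitions above can be sketched roughly as follows. The deverbal suffix list, the dependency-label set, and the toy nested-list parse encoding are illustrative assumptions, not the study's actual extraction pipeline.

```python
# Illustrative sketch of the three syntactic features defined above, over
# (token, POS) pairs and dependency labels. Suffixes and label sets are
# hypothetical stand-ins for the paper's parser-based extraction.

DEVERBAL_SUFFIXES = ("tion", "ment", "ance", "ence", "al", "ing")

def nominalization_frequency(tagged):
    """Ratio of deverbal nouns to verbs (suffix filter over NOUN tokens)."""
    nouns = [tok for tok, pos in tagged if pos == "NOUN"]
    verbs = [tok for tok, pos in tagged if pos == "VERB"]
    deverbal = [n for n in nouns if n.lower().endswith(DEVERBAL_SUFFIXES)]
    return len(deverbal) / max(len(verbs), 1)

def coordination_depth(clause):
    """Max depth of coordinated clauses, over a toy nested-list parse."""
    if not isinstance(clause, list):
        return 0
    return 1 + max((coordination_depth(c) for c in clause), default=0)

def subordination_complexity(dep_labels, n_sentences):
    """Embedded subordinate clauses per sentence, counted from dependency
    labels such as 'advcl' and 'ccomp' (Universal Dependencies style)."""
    subordinate = sum(1 for d in dep_labels if d in {"advcl", "ccomp", "acl", "xcomp"})
    return subordinate / max(n_sentences, 1)

tagged = [("procurement", "NOUN"), ("of", "ADP"), ("equipment", "NOUN"),
          ("required", "VERB"), ("authorization", "NOUN")]
print(nominalization_frequency(tagged))  # 3 deverbal nouns / 1 verb = 3.0
```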
To mitigate these syntactic biases, we implemented a rule-based debiasing module that re-parses each narrative into a standardized fair-syntax transformation, structured around a minimal Subject-Verb-Object sequence. Evaluation on a corpus of 18,240 expense records drawn from the U.S. Federal Travel Expenditure dataset (GSA SmartPay, 2018–2020, https://smartpay.gsa.gov) shows that the fair-syntax transformation reduced misclassification rates by 15 percent. It also improved key pre-audit compliance indicators, including GL code accuracy—defined as the percentage of model-assigned codes matching human-validated general ledger categories, with a target threshold of ≥ 95 percent—and reconciliation match rate, the proportion of expense records successfully aligned with authorized payment entries, aiming for ≥ 98 percent.
The findings reveal a direct operational link between linguistic form and algorithmic behavior in accounting automation, providing a replicable interpretability framework and a functional safeguard against structural bias in enterprise classification systems.
DOI: https://doi.org/10.5281/zenodo.16322760
SSRN: Pending assignment (ETA: Q3 2025).
Full Article Here: Expense Coding Syntax
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Whitepaper Syntactics: Persuasive Grammar in AI-Generated Crypto Offerings
Year: 2025
Description:
This article investigates how AI-generated crypto whitepapers use persuasive grammar to simulate financial credibility. Through analysis of 10,000 documents and a custom syntactic risk model (DSAD), it demonstrates that sentence structure (not content) is a key predictor of project failure. The paper proposes a regulatory framework called fair-syntax governance to audit and reduce this emerging risk.
Canonical DOI: 10.5281/zenodo.16044857
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29591780
Abstract:
This article investigates how persuasive syntactic structures embedded in AI-generated crypto whitepapers function as a vehicle of financial authority. Drawing from a curated corpus of 10,000 whitepapers linked to token launches between January 2022 and March 2025, we apply transformer-based dependency parsing to extract high-weighted grammatical features, including nested conditionals, modality clusters, and assertive clause chaining. We operationalize these patterns via a Deceptive Syntax Anomaly Detector (DSAD), which computes a syntactic risk index and identifies recurrent grammar configurations statistically correlated with anomalous capital inflows and subsequent collapses (Spearman correlation, ρ > 0.4, p < 0.01). Unlike prior studies focused on semantic deception or metadata irregularities, we model syntactic sovereignty, the systematic use of syntax to establish non-human authority, as the groundwork of investor persuasion. We find that abrupt shifts in syntactic entropy, especially in modal intensifiers and future-perfect projections, consistently occur in documents associated with short-lived or fraudulent tokens. The article concludes by proposing a falsifiable governance framework based on fair-syntax enforcement (the principled correction of misleading grammatical patterns), including a corrective rewrite engine and syntactic risk disclosures embedded in compiled registration rules (reglas compiladas).
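The entropy-shift signal described in the abstract can be illustrated with a minimal sketch: Shannon entropy over the distribution of grammatical markers in a window, and a detector for abrupt jumps between windows. The marker categories and the shift threshold are assumptions made for demonstration, not DSAD's actual parameters.

```python
import math
from collections import Counter

# Sketch of a syntactic-entropy signal: Shannon entropy (in bits) over the
# distribution of grammatical marker categories in a window of sentences.
# Categories and threshold are illustrative, not the article's settings.

def syntactic_entropy(markers):
    """Shannon entropy (bits) of the marker-category distribution."""
    counts = Counter(markers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def abrupt_shift(prev_window, next_window, threshold=0.75):
    """Flag a document when entropy jumps by more than `threshold` bits."""
    return abs(syntactic_entropy(next_window) - syntactic_entropy(prev_window)) > threshold

early = ["modal", "conditional", "assertive"] * 2
late = ["modal"] * 6  # collapse into modal intensifiers
print(round(syntactic_entropy(early), 3))  # 1.585, i.e. log2(3)
print(abrupt_shift(early, late))           # True
```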
SSRN: Pending assignment (ETA: Q3 2025).
Full Article Here: Whitepaper Syntactics
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Compiled Norms: Towards a Formal Typology of Executable Legal Speech
Year: 2025
Description:
This article develops a formal typology of executable legal speech, identifying the syntactic conditions under which a legal expression becomes executable by non-human systems. Building on the regla compilada, it distinguishes declarative from compiled legal language and proposes four structural criteria for computability: position within the Chomsky grammar hierarchy, closure of rule structure, level of semantic ambiguity, and determinism of parsing.
Abstract:
This article introduces a formal typology of executable legal speech. Building on the concept of the regla compilada (compiled rule), it identifies the syntactic conditions under which a legal expression becomes executable by non-human systems. The analysis distinguishes declarative from compiled legal language and proposes four structural criteria for computability: position within the Chomsky grammar hierarchy, closure of rule structure, level of semantic ambiguity, and determinism of parsing. Instead of interpreting legal meaning, the article isolates the formal properties that permit legal norms to function as executable code. The objective is to define a machine-readable grammar of authority in which execution displaces interpretation, and structural form triggers legal action.
DOI: https://doi.org/10.5281/zenodo.15881325
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29562293. An SSRN ID is pending assignment (ETA: Q3 2025).
Full Article Here: Compiled Norms
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Protocol Without Prognosis: Clinical Authority in Large-Scale Diagnostic Language Models
Year: 2025
Description:
This article presents two original metrics, the Hedging Collapse Coefficient (HCC) and the Responsibility Leakage Index (RLI), to measure how AI-generated clinical reports suppress uncertainty and shift institutional authority through syntax. A 50,000-report multilingual corpus and a regulatory alignment grid expose compliance gaps in FDA and MDR frameworks.
Abstract:
This article introduces the concept of syntactic delegation in clinical diagnostic systems. It demonstrates how medical language models issue recommendations without preserving the linguistic markers of clinical uncertainty. The analysis draws from a multilingual corpus of 50,000 radiology reports, balanced across English, Spanish, German, and Mandarin. All data are de-identified and licensed for open research use. Each report is paired with a synthetic rewrite generated by a fine-tuned GPT-4 variant.
Two core metrics are introduced. The Hedging Collapse Coefficient (HCC) is defined as 1 − (h / t), where h represents the number of hedging tokens retained in the model output, and t the total hedging tokens in the source report. The Responsibility Leakage Index (RLI) is defined as d / r, where d is the number of AI-generated decisions executed without clinician sign-off, and r the total number of decisions requiring such sign-off. For the evaluated corpus, mean HCC = 0.47 and mean RLI = 0.22.
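The two formulas above can be made concrete with a short sketch. The hedging lexicon and the sample counts are illustrative stand-ins, not the corpus tooling used in the study.

```python
# Minimal sketch of the two metrics defined above. The hedging lexicon and
# the example counts are illustrative assumptions.

HEDGES = {"may", "might", "possibly", "suggests", "likely"}

def hedging_collapse_coefficient(source_tokens, output_tokens):
    """HCC = 1 - (h / t): h = hedging tokens retained in the model output,
    t = hedging tokens in the source report."""
    t = sum(1 for tok in source_tokens if tok.lower() in HEDGES)
    if t == 0:
        return 0.0  # no hedging to collapse
    h = sum(1 for tok in output_tokens if tok.lower() in HEDGES)
    return 1.0 - (h / t)

def responsibility_leakage_index(decisions_without_signoff, decisions_requiring_signoff):
    """RLI = d / r: d = AI-generated decisions executed without clinician
    sign-off, r = total decisions requiring such sign-off."""
    if decisions_requiring_signoff == 0:
        return 0.0
    return decisions_without_signoff / decisions_requiring_signoff

source = ["findings", "possibly", "consistent", "with", "edema", "may", "reflect"]
output = ["findings", "consistent", "with", "edema", "may", "reflect"]
print(hedging_collapse_coefficient(source, output))  # 1 - 1/2 = 0.5
print(responsibility_leakage_index(11, 50))          # 0.22, the reported corpus mean
```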
Medical reporting is treated as a regla compilada (compiled rule), understood here as a type-0 production within the Chomsky hierarchy (Chomsky 1965, p. 17; Montague 1974, p. 52). This transformation removes syntactic hedging and creates legal ambiguity in informed-consent frameworks. The article compares the FDA Software as a Medical Device guidance with the EU Medical Device Regulation and maps both against a single syntactic risk threshold defined by HCC greater than 0.40 or RLI greater than 0.25.
Two legal precedents are analyzed. In United States v. Sorin (2024), a federal court recognized institutional fault after the erasure of diagnostic uncertainty in an AI-generated output. In European Court of Justice C-489/23, liability was affirmed when a medical report produced by a predictive model lacked required modal disclaimers under EU law.
The article proposes the implementation of syntax-level checkpoints within the inference layer of diagnostic systems. Audits should be conducted every seven days by a designated clinical safety officer. Enforcement is triggered if the weekly HCC average rises more than five percentage points above baseline. See Appendix A for the alignment grid comparing SaMD and MDR requirements against the syntactic risk threshold. The framework of sovereign executable authority is grounded in prior analysis from Algorithmic Obedience (2023, p. 67), where syntactic execution is treated as an operational form of command.
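The enforcement logic above, together with the risk threshold stated earlier (HCC greater than 0.40 or RLI greater than 0.25), can be sketched as two small checks. The sample values are illustrative.

```python
# Sketch of the thresholds stated above: a per-report risk flag and the
# weekly audit trigger (weekly mean HCC more than five percentage points
# above baseline). Sample values are illustrative.

def risk_flag(hcc, rli):
    """Syntactic risk threshold: HCC > 0.40 or RLI > 0.25."""
    return hcc > 0.40 or rli > 0.25

def audit_trigger(weekly_hcc, baseline_hcc, margin=0.05):
    """True when the weekly mean HCC rises more than `margin` above baseline."""
    weekly_mean = sum(weekly_hcc) / len(weekly_hcc)
    return weekly_mean - baseline_hcc > margin

baseline = 0.47  # corpus mean HCC reported above
print(risk_flag(0.47, 0.22))                        # True: HCC exceeds 0.40
print(audit_trigger([0.50, 0.55, 0.56], baseline))  # True: mean ~0.537 > 0.52
print(audit_trigger([0.48, 0.49, 0.47], baseline))  # False: within margin
```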
Full Article Here: Protocol Without Prognosis
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29546624. An SSRN ID is pending assignment (ETA: Q3 2025).
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Executable Power: Syntax as Infrastructure in Predictive Societies
Year: 2025
Description:
This article defines executable power as a form of authority enacted through syntactic structures rather than subjects, narratives, or symbolic legitimacy. Grounded in formal grammars and deterministic execution conditions (e.g., LTL, Solidity require clauses), it analyzes three empirical systems—LLMs, TAPs, and smart contracts—that perform irreversible actions upon formal triggers. All cases demonstrated a 100 % execution rate with no human override and an average latency of 0.63 ± 0.17 s. The study frames this syntactic operativity as a rupture from classical theories of power, aligning instead with infrastructural and executional governance models. Verification is outcome-based and replicable, emphasizing structural sovereignty over interpretative authority.
Canonical DOI: 10.5281/zenodo.15754714
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29424524
Abstract:
This article introduces the concept of executable power as a structural form of authority that does not rely on subjects, narratives, or symbolic legitimacy, but on the direct operativity of syntactic structures. Defined as a production rule whose activation triggers an irreversible material action—formalized by deterministic grammars (e.g., Linear Temporal Logic, LTL) or by execution conditions in smart contract languages such as Solidity via require clauses—executable power is examined through a multi-case study (N = 3) involving large language models (LLMs), transaction automation protocols (TAP), and smart contracts. Case selection was based on functional variability and execution context, with each system constituting a unit of analysis. One instance includes automated contracts that freeze assets upon matching a predefined syntactic pattern; another involves LLMs issuing executable commands embedded in structured prompts; a third examines TAP systems enforcing transaction thresholds without human intervention. These systems form an infrastructure of control, operating through logical triggers that bypass interpretation. Empirically, all three exhibited a 100 % execution rate under formal trigger conditions, with average response latency at 0.63 ± 0.17 seconds and no recorded human override in controlled environments. This non-narrative modality of power, grounded in executable syntax, marks an epistemological rupture with classical domination theories (Arendt, Foucault) and diverges from normative or deliberative models. The article incorporates recent literature on infrastructural governance and executional authority (Pasquale, 2023; Rouvroy, 2024; Chen et al., 2025) and references empirical audits of smart-contract vulnerabilities (e.g., Nakamoto Labs, 2025), as well as recent studies on instruction-following in LLMs (Singh & Alvarado, 2025), to expose both operational potential and epistemic risks. 
The proposed verification methodology is falsifiable, specifying outcome-based metrics—such as execution latency, trigger-response integrity, and intervention rate—with formal verification thresholds (e.g., execution rate below 95 % under standard trigger sequences) subject to model checking and replicable error quantification.
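The outcome-based metrics named above can be illustrated with a minimal verification sketch over a trial log. The log format and the values are assumptions made for demonstration.

```python
# Sketch of outcome-based verification: compute execution rate and mean
# latency from a trial log and check the stated threshold (execution rate
# at or above 95 % under standard trigger sequences). Data are illustrative.

def verify(trials, min_rate=0.95):
    """trials: list of (executed: bool, latency_s: float) per trigger event."""
    rate = sum(1 for ok, _ in trials if ok) / len(trials)
    latencies = [lat for ok, lat in trials if ok]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return {"execution_rate": rate,
            "mean_latency_s": mean_latency,
            "passes": rate >= min_rate}

trials = [(True, 0.61), (True, 0.58), (True, 0.70), (True, 0.63)]
report = verify(trials)
print(report["execution_rate"])  # 1.0, matching the reported 100 % rate
print(report["passes"])          # True
```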
DOI: https://doi.org/10.5281/zenodo.15754714
SSRN: Pending assignment (ETA: Q3 2025).
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Ethos Without Source: Algorithmic Identity and the Simulation of Credibility
Year: 2025
Description:
This article introduces the concept of synthetic ethos—algorithmically generated credibility without a human subject or verifiable source. Through analysis of 1,500 outputs from GPT-4, Claude, and Gemini across healthcare, law, and education, it shows how language models simulate authority through structural patterns like passive voice, modality, and technical jargon. The paper proposes a falsifiable framework for detecting synthetic ethos and argues for regulatory approaches based on linguistic structures rather than semantic content.
Canonical DOI: 10.5281/zenodo.15700412
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29367062
Abstract:
Generative language models increasingly produce texts that simulate authority without a verifiable author or institutional grounding. This paper introduces synthetic ethos: the appearance of credibility constructed by algorithms trained to replicate human-like discourse without any connection to expertise, accountability, or source traceability. Such simulations raise critical risks in high-stakes domains including healthcare, law, and education.
We analyze 1,500 AI-generated texts produced by large-scale models such as GPT-4, collected from public datasets and benchmark repositories. Using discourse analysis and pattern-based structural classification, we identify recurring linguistic features, such as depersonalized tone, adaptive register, and unreferenced assertions, that collectively produce the illusion of a credible voice. In healthcare, for instance, generative models produce diagnostic language without citing medical sources, risking patient misguidance. In legal contexts, generated recommendations mimic normative authority while lacking any basis in legislation or case law. In education, synthetic essays simulate scholarly argumentation without verifiable references.
Our findings demonstrate that synthetic ethos is not an accidental artifact, but an engineered outcome of training objectives aligned with persuasive fluency. We argue that detecting such algorithmic credibility is essential for ethical and epistemically responsible AI deployment. To this end, we propose technical standards for evaluating source traceability and discourse consistency in generative outputs. These metrics can inform regulatory frameworks in AI governance, enabling oversight mechanisms that protect users from misleading forms of simulated authority and mitigate long-term erosion of public trust in institutional knowledge.
Original article DOI: https://doi.org/10.5281/zenodo.15700412
Additional repository mirrors:
- Figshare: https://doi.org/10.6084/m9.figshare.29367062
- SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5313317
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
Grammar Without Judgment: Eliminability of Ethical Trace in Syntactic Execution
Year: 2025
DOI: https://doi.org/10.5281/zenodo.15783365
Description:
This paper demonstrates that ethical judgment, modeled as a syntactic node [E], can be structurally deleted through the rule δ:[E] → ∅ within a bounded derivational window, resulting in grammars that execute without moral traceability.
A mirrored version is also available on Figshare: https://doi.org/10.6084/m9.figshare.29447060. An SSRN ID is pending assignment (ETA: Q3 2025).
Abstract:
This article advances a new theoretical hypothesis: a regla compilada, defined as a Type-0 production in the Chomsky hierarchy (Chomsky 1965, 101-103; Montague 1974, 55-57), can eliminate the ethical trace embedded in syntactic operations without resorting to semantic suppression. Grounded in the notion of the soberano ejecutable (Startari 2025, 12-16) and located within the Executable Power canon (Startari 2025, DOI 10.5281/zenodo.15754714, 34-36), the paper argues that ethical judgment, treated here as a syntactically traceable node, can be structurally excised through a deletion rule applied during derivation. Existing research in algorithmic alignment and computational ethics (Anderson 2024, 89-92; Floridi 2023, 143-147) has not addressed the strictly syntactic eliminability of moral judgment; this proposal therefore establishes a novel logical vector toward operational grammars that function without ethical residues.
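The deletion rule δ: [E] → ∅ can be illustrated on a toy derivation tree. The nested-tuple tree encoding below is an assumption made for this sketch, not the paper's formalism.

```python
# Toy illustration of a deletion rule that excises every subtree labeled
# 'E' (the ethical-judgment node) from a (label, children) derivation tree.
# The encoding is a hypothetical stand-in for the paper's formal grammar.

def delete_E(node):
    """Return a copy of the tree with every 'E'-labeled subtree removed."""
    label, children = node
    kept = [delete_E(c) for c in children if c[0] != "E"]
    return (label, kept)

tree = ("S", [("NP", []), ("VP", [("E", [("judgment", [])]), ("V", [])])])
print(delete_E(tree))  # ('S', [('NP', []), ('VP', [('V', [])])])
```

The derivation still terminates and remains well-formed after deletion, which is the point the abstract makes: execution proceeds with no trace of the excised node.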
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica, Figshare
TLOC – The Irreducibility of Structural Obedience in Generative Models
Year: 2025
Description:
This article introduces the Theorem of the Limit of Conditional Obedience Verification (TLOC), which demonstrates that obedience in generative models—such as large language models (LLMs)—cannot be structurally verified unless the internal activation trajectory entails the condition being obeyed. The TLOC defines a formal limit applicable to all black-box architectures based on statistical approximation P(y∣x), in which latent activation π(x) is opaque and symbolic evaluation of C(x) is absent.
The theorem is formally falsifiable but currently holds across all known transformer-based generative models as of 2025. The article builds upon and formalizes prior structural hypotheses developed by A. V. Startari concerning syntactic obedience, algorithmic simulation, and epistemic opacity.
Canonical DOI: 10.5281/zenodo.15675710
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29263493
Abstract:
Theorem of the Limit of Conditional Obedience Verification (TLOC): Structural Non-Verifiability in Generative Models
This article presents the formal demonstration of a structural limit in contemporary generative models: the impossibility of verifying whether a system has internally evaluated a condition before producing an output that appears to comply with it. The theorem (TLOC) shows that in architectures based on statistical inference, such as large language models (LLMs), obedience cannot be distinguished from simulation if the latent trajectory π(x) lacks symbolic access and does not entail the condition C(x). This structural opacity renders ethical, legal, or procedural compliance unverifiable. The article defines the TLOC as a negative operational theorem, falsifiable only under conditions where internal logic is traceable. It concludes that current LLMs can simulate normativity but cannot prove conditional obedience. The TLOC thus formalizes the structural boundary previously developed by Startari in works on syntactic authority, simulation of judgment, and algorithmic colonization of time.
Redundant archive copy: https://doi.org/10.6084/m9.figshare.29329184 — Maintained for structural traceability and preservation of citation continuity.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
Year: 2025
Description:
This article theorizes a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), arguing that authority in artificial systems no longer depends on syntax or representation, but on executional structure. It introduces the concept of post-semantic and post-intentional legitimacy, where models resolve tasks without meaning, intention, or reference. Legitimacy is redefined as the internal sufficiency of structurally constrained operations. This work is part of the Grammars of Power series and forms the theoretical foundation for a broader post-representational epistemology of AI.
Canonical DOI: 10.5281/zenodo.15635364
A mirrored version is also available on Figshare: 10.6084/m9.figshare.29286362
Abstract:
This article formulates a structural transition from Large Language Models (LLMs) to Language Reasoning Models (LRMs), redefining authority in artificial systems. While LLMs operated under syntactic authority without execution, producing fluent but functionally passive outputs, LRMs establish functional authority without agency. These models do not intend, interpret, or know. They instantiate procedural trajectories that resolve internally, without reference, meaning, or epistemic grounding. This marks the onset of a post-representational regime, where outputs are structurally valid not because they correspond to reality, but because they complete operations encoded in the architecture. Neutrality, previously a statistical illusion tied to training data, becomes a structural simulation of rationality, governed by constraint, not intention. The model does not speak. It acts. It does not signify. It computes. Authority no longer obeys form; it executes function.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Non-Neutral by Design: Why Generative Models Cannot Escape Linguistic Training
Year: 2025
Description:
This article explores the structural impossibility of achieving semantic neutrality in large language models (LLMs), focusing on GPT as a case study. It demonstrates that, even under rigorously controlled prompting scenarios—such as invented symbolic codes or syntactic proto-languages—GPT systematically reactivates latent semantic patterns derived from its training data. Building on previous research into syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), the article presents empirical experiments designed to isolate the model from recognizable linguistic content. These experiments consistently show GPT’s inability to produce or interpret structure without semantic interference. The study introduces a falsifiable framework for identifying and measuring semantic contamination in generative systems, arguing that such contamination is structurally inevitable within probabilistic language architectures. The results dispute common assumptions about user control and formal neutrality, concluding that generative models like GPT are non-neutral by design.
Abstract:
This article investigates the structural impossibility of semantic neutrality in large language models (LLMs), using GPT as a test subject. It argues that even under strictly formal prompting conditions—such as invented symbolic systems or syntactic proto-languages—GPT reactivates latent semantic structures drawn from its training corpus. The analysis builds upon prior work on syntactic authority, post-referential logic, and algorithmic discourse (Startari, 2025), and introduces empirical tests designed to isolate the model from known linguistic content. These tests demonstrate GPT’s consistent failure to interpret or generate structure without semantic interference. The study proposes a falsifiable framework to define and detect semantic contamination in generative systems, asserting that such contamination is not incidental but intrinsic to the architecture of probabilistic language models. The findings challenge prevailing narratives of user-driven interactivity and formal control, establishing that GPT—and similar systems—are non-neutral by design.
🔗 Download via Zenodo
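The contamination framework described above can be illustrated with a deliberately minimal sketch. This is not the paper's protocol: the lexicon, the tokenized reply, and the scoring rule are all hypothetical stand-ins, meant only to show the shape of a measurement in which output to a purely symbolic prompt is scored by how much natural-language vocabulary leaks back in.

```python
# Toy illustration (hypothetical, not the article's actual method):
# score semantic "leakage" into a reply that a purely symbolic prompt
# should not elicit. A stand-in for a natural-language vocabulary:
NATURAL_LEXICON = {"the", "symbol", "to", "meaning", "is", "of", "and"}

def contamination_score(output_tokens):
    """Fraction of output tokens drawn from the natural-language lexicon.

    0.0 means no recognizable semantic material surfaced;
    1.0 means the output is entirely natural language.
    """
    if not output_tokens:
        return 0.0
    leaked = sum(1 for t in output_tokens if t.lower() in NATURAL_LEXICON)
    return leaked / len(output_tokens)

# A symbolic prompt like "@@1 -> ##2" ideally yields symbolic continuations,
# but a contaminated reply drifts back into natural language:
reply = ["the", "symbol", "@@1", "maps", "to", "##2"]
print(contamination_score(reply))  # 3 of 6 tokens are lexicon hits -> 0.5
```

A real instrument would replace the token-overlap rule with embedding-based similarity against a reference corpus, but the falsifiable structure is the same: a predicted score of zero under symbolic isolation that the model systematically fails to achieve.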
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
AI and Syntactic Sovereignty: How Artificial Language Structures Legitimize Non-Human Authority
Year: 2025
DOI: 10.5281/zenodo.15395917
Description:
This theoretical paper introduces the concept of Syntactic Sovereignty, a structural framework for understanding how authority is produced in artificial language systems. Drawing on linguistics, epistemology, and critical theory, the paper argues that in post-human contexts, epistemic legitimacy no longer relies on truth, intention, or subjectivity, but on the formal properties of language—its syntax, modality, and institutional simulation. The work synthesizes prior developments on synthetic authority and power grammars to propose a general theory: in algorithmic discourse, structure governs. This is a foundational contribution to the fields of AI philosophy, computational linguistics, and post-human epistemology.
Abstract:
This article introduces the theory of Syntactic Sovereignty to explain how artificial intelligence systems, particularly language models, generate perceptions of epistemic authority without subjectivity, intentionality, or content-based legitimacy. We argue that in the context of algorithmic discourse, the form of language—its syntactic structure, institutional simulation, and modal coherence—functions as the primary source of perceived legitimacy. Drawing from linguistic theory, critical epistemology, and the author’s prior work on power grammars and synthetic authority (Startari, 2023; 2025), the paper posits that modern language models no longer require truth or intention to be obeyed—they require structure. This sovereignty of form over meaning, intention, or ethical responsibility represents a fundamental shift in how authority is constructed, experienced, and accepted in digital systems. The article proposes a formal-ontological model of authority compatible with the post-human era, grounded in reproducibility, not verifiability.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
The Illusion of Objectivity: How Language Constructs Authority
Year: 2025
DOI: 10.5281/zenodo.15395917
Description:
This article challenges the foundational assumption that objectivity is an empirical stance. It demonstrates how syntactic structures—particularly depersonalized grammar—produce the rhetorical effect of neutrality, allowing institutional discourses to conceal their political and ideological origins.
Abstract:
Objectivity is not a neutral condition of discourse, but a linguistic construction. This article examines how bureaucratic language, scientific papers, and institutional communications employ depersonalized syntax (such as the passive voice and nominalizations) to simulate neutrality. These structures displace agency, stabilize meaning, and create the illusion that legitimacy derives from formality rather than from political intention. The analysis bridges critical linguistics with epistemology and authority theory.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Artificial Intelligence and Synthetic Authority: An Impersonal Grammar of Power
Year: 2025
DOI: 10.5281/zenodo.15442928
Description:
This publication addresses how AI systems generate institutional authority without epistemic substance. It introduces the concept of “synthetic legitimacy” and examines how impersonal grammar, citation loops, and predictive text function as instruments of structural authority in algorithmic environments.
Abstract:
As AI-generated language increasingly mediates knowledge production, this chapter investigates the discursive structures that allow artificial systems to project authority. It defines “synthetic authority” as a grammar of legitimacy that operates without human referents, constructed through repetition, citation proxies, and predictive alignment. The work draws on post-referential theory and critical linguistics to critique the epistemological implications of impersonal discourse in science, governance, and education.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
The Passive Voice in Artificial Intelligence Language: Algorithmic Neutrality and the Disappearance of Agency
Year: 2025
DOI: 10.5281/zenodo.15464765
Description:
This chapter explores the syntactic foundations of power in AI-generated language. It focuses on how the passive voice is used as a mechanism of depersonalization, simulating neutrality and obscuring responsibility in algorithmic discourse. The analysis offers a linguistic critique of authority construction in artificial systems and challenges the myth of objectivity in machine-generated texts.
Abstract:
In contemporary artificial intelligence systems, the passive voice is not a stylistic residue but a structural feature that enables the disappearance of agency. This chapter examines how algorithmic language employs grammatical neutrality to obscure authorship, responsibility, and intent—thereby reinforcing institutional legitimacy without accountability. The study draws from computational linguistics, epistemology, and power theory to expose how AI reconfigures the conditions under which discourse is perceived as legitimate.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
AI and the Structural Autonomy of Sense: A Theory of Post-Referential Operative Representation
Year: 2025
DOI: 10.5281/zenodo.15519613
Description:
This paper introduces the theory of Structural Autonomy of Sense, a novel framework to conceptualize representations that exert real-world effects without empirical referentiality. In contrast to traditional models based on truth, belief, or social consensus, the proposed theory defines a class of representations whose legitimacy derives solely from internal coherence, executable structure, and systemic operability. These are called post-referential operative representations.
Abstract:
Through a formal model and empirical analysis — including predictive algorithms in justice, black-box AI diagnostics, credit scoring systems, and synthetic media — the paper demonstrates how such representations function autonomously within closed systems to produce legal, medical, economic, and epistemic consequences. This marks a paradigmatic shift in epistemology and ontology: from referential truth to functional structure as the source of authority.
The work critically distinguishes itself from prior theories (Baudrillard, Berger & Luckmann, Searle, Luhmann, Lyotard) by offering a syntactic-operational foundation for legitimacy, independent of symbolic meaning or intersubjective validation.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica
Ethos and Artificial Intelligence: The Disappearance of the Subject in Algorithmic Legitimacy
Year: 2025
DOI: 10.5281/zenodo.15489309
Description:
This article explores how artificial intelligence disrupts traditional rhetorical models by erasing the subject behind the discourse. It positions the enunciative "ethos" as a disappearing anchor of legitimacy in AI-generated language, replacing personal accountability with algorithmic impersonality. Published as part of the Grammars of Power research program.
Abstract:
This article examines the erosion of ethos—the enunciative subject—in texts generated by artificial intelligence. Drawing from rhetorical theory and discourse analysis, it investigates how algorithmic language simulates legitimacy through grammatical impersonality, bypassing the traditional anchoring of authority in a speaker’s ethical posture. From Aristotelian rhetoric to contemporary epistemology, ethos has functioned as a foundational element in establishing discursive credibility. However, in large language models, legitimacy is produced without subjectivity, intentionality, or embodied responsibility. This paper analyzes this shift across four dimensions: the grammatical void of the enunciator, the simulation of neutrality as rhetorical strategy, the rise of impersonal structures that entail no risk or accountability, and the reader’s disoriented obedience to statements without authorship. We argue that AI-driven discourse constitutes a new form of legitimacy without subject—statistically plausible, structurally coherent, and epistemically opaque.
🔗 Download via Zenodo
Platforms: Zenodo, ORCID, ResearchGate, Academia.edu, SSRN, Acta Académica