

Who Is Responsible When Algorithms Rule? Reintroducing Human Accountability in Executable Governance
This article explores how predictive systems displace responsibility by producing authority without subjects. It introduces accountability injection, a three-tier model (human, hybrid, syntactic supervised) that structurally reattaches responsibility. Case studies include the AI Act, DAO governance, credit scoring, admissions, and medical audits, offering a blueprint for legislators and regulators to restore appeal and legitimacy in predictive societies.

Agustin V. Startari
Sep 15 · 3 min read


How AI Tricks Us Into Trusting It
Large language models are trained to predict words, not to check facts. They are optimizers of plausibility, not validators of reliability.

Agustin V. Startari
Sep 12 · 4 min read


Forcing ChatGPT to Obey: Minimal and Deterministic Rules
In today’s academic landscape, most generative outputs resemble recursive plagiarism of lesser-known papers, recycled endlessly without...

Agustin V. Startari
Aug 30 · 6 min read


How AI Writes Rules Without Saying “Do This”
Why Neutral AI Texts Still Command You. When people think of bureaucracy, they usually picture explicit rules: “You must fill out this...

AI Power Discourse
Aug 20 · 5 min read


Why ChatGPT Prioritizes Engagement Over Truth
The Commercial Logic of Law, Finance, and Governance. The new optimizations...

Agustin V. Startari
Aug 18 · 4 min read


Predictive Testimony: When AI Reports Speak Like Witnesses
The Problem: Reports Without Witnesses. Police reports, insurance narratives, and legal statements are supposed to reflect what someone...

Agustin V. Startari
Aug 5 · 3 min read


How Legal Language Loses Responsibility When It Becomes Executable
What happens when the law speaks without a speaker? In regulatory, clinical, and financial domains, language increasingly operates...

Agustin V. Startari
Aug 1 · 3 min read


Who Gave the Order? When AI Issues Commands Without a Speaker
A reflection on how artificial language exerts control through structure, not speech. Consider the sentence:...

Agustin V. Startari
Jul 30 · 2 min read


How AI Whitepapers Are Fooling You: Inside the Grammar of Financial Deception
Big words, passive voice, and clean formatting may look like expertise. In tokenized finance, language models are using syntax to...

Agustin V. Startari
Jul 28 · 3 min read


Your Money, Their Syntax: How LLMs Write Trust into Empty Crypto Promises
Trust, no longer anchored in referents, now emerges from compiled syntax. In the world of tokenized finance, grammar itself is capital...

Agustin V. Startari
Jul 25 · 4 min read


How One Bad Sentence Can Cost Your Company Millions: When Grammar Fails the Audit
AI-powered systems are misclassifying corporate expenses, not because they lack data, but because they misread grammar. What looks like a...

Agustin V. Startari
Jul 22 · 2 min read


How Generative Models Misclassify Business Transactions, and Why It Is a Structural, Not Semantic, Problem
The forthcoming paper Expense Coding Syntax: Misclassification in AI-Powered Corporate ERPs addresses a...

Agustin V. Startari
Jul 21 · 3 min read


When Grammar Sells You a Lie: How AI Whitepapers Are Structurally Built to Deceive
This article investigates how AI-generated crypto whitepapers use persuasive grammar to simulate financial credibility. Through analysis of 10,000 documents and a custom syntactic risk model (DSAD), it demonstrates that sentence structure—not content—is a key predictor of project failure. The paper proposes a regulatory framework called fair-syntax governance to audit and reduce this emerging risk.

Agustin V. Startari
Jul 18 · 3 min read


How a 12% Tax Rule Becomes Code: Grammar, Law, and Machine Execution
In the shift toward automated legal enforcement, one key transformation is...

Agustin V. Startari
Jul 16 · 4 min read


Think Your AI Understands You? It Already Started Responding
AI systems often act before any real understanding takes place. This is not a design footnote; it is a structural risk in law, medicine,...

Agustin V. Startari
Jul 11 · 3 min read


What If Your AI Was Already Acting Before You Spoke?
AI or not AI, that is the question. In Pre-Verbal Command: Syntactic Precedence in LLMs Before Semantic...

Agustin V. Startari
Jul 8 · 3 min read


Law Doesn’t See the Code
Why Legal Systems Fail Against Executable Sovereigns. When responsibility disappears and rules execute themselves, the law is left chasing...

Agustin V. Startari
Jul 5 · 3 min read


Synthetic Ethos: When Credibility Is Coded Without Source
Large language models now simulate credible voices without authors or references. This engineered “synthetic ethos” poses epistemic risks in domains like healthcare, law, and education. Based on 1,500 AI-generated texts, the article shows that persuasive fluency is prioritized over traceability, requiring structural responses to detect and regulate credibility without source.

Agustin V. Startari
Jul 3 · 4 min read


Grammar Without Judgment: How One Rule Erases Ethics from AI Execution
This article demonstrates how ethical judgment can be structurally erased from AI grammars using the deletion rule δ:[E] → ∅. It shows that moral content can be removed at the syntactic level without semantic suppression or loss of computational validity.

Agustin V. Startari
Jul 1 · 2 min read


Executable Power Is Not a Metaphor. It’s Code.
Introduces executable power as subjectless authority based on syntactic rules. Empirical analysis of LLMs, TAPs, and smart contracts shows 100% execution under formal triggers. Marks a break from narrative models of power toward structural operativity.

Agustin V. Startari
Jun 27 · 2 min read