Who Is Responsible When Algorithms Rule? Reintroducing Human Accountability in Executable Governance
This article explores how predictive systems displace responsibility by producing authority without subjects. It introduces accountability injection, a three-tier model (human, hybrid, and syntactically supervised) that structurally reattaches responsibility to decisions. Case studies span the AI Act, DAO governance, credit scoring, admissions, and medical audits, offering legislators and regulators a blueprint for restoring the right of appeal and legitimacy in predictive societies.

Agustin V. Startari
Sep 15 · 3 min read


Credibility Without a Human: How AI Fakes Authority and Why It Works
This post explores how large language models simulate credibility through grammar rather than truth. It introduces the concept of synthetic ethos, presents empirical findings across healthcare, law, and education, and proposes a structural detection framework for confronting AI-generated authority without origin.

Agustin V. Startari
Jun 20 · 3 min read