5 Hacks 99% of AI Users Don’t Use (But I Do, and You Should)
- Agustin V. Startari
- 2 days ago
- 4 min read
You don’t need more prompts. You need less illusion.
Introduction: The Myth of the Average User
Most articles on “how to use AI” are mildly reworded copies of the same formula: superficial prompt engineering, predictable command lists, and a quasi-religious praise of tools. But there’s a difference between using AI as an occasional assistant and using it as a structural extension of thought.
That difference is epistemological, not technical.
I don’t use AI as an oracle. I use it as a system of verification, confrontation, and structural extraction. And that changes everything.
These are 5 hacks — or rather, 5 deliberate deviations from standard use — that completely alter the potential of the interaction.
1. I Never Look for Answers — I Format the Problem Conditions
Most people show up with questions. I show up with structures.
The first mistake of the average user is to assume AI should fill a knowledge gap. What I do instead is define the formal conditions of a problem as if I were constructing a theorem or a logical framework.
I don’t ask: “What does Foucault think about authority?”
I define: “Develop a system of categories on non-agentive self-referential discourse in normative structures, based on the Foucauldian model of power as a network.”
Result? The AI doesn’t guess. It operates.
2. I Don’t Use Prompts. I Use Protocols
The second hack is crucial: I entirely replace prompts with permanent protocols.
This means I define rules of interaction, not isolated requests. For example:
– Never complete unsolicited sentences
– Do not interpret intentions
– Conflict softening is prohibited
– Verifiable data takes precedence over linguistic politeness
This eliminates most interpretation errors. I don’t have a “conversation” with AI — I run a discursive operating system under controlled syntax.
I don’t need a chatbot. I need a logic machine with language.
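The rules above can be encoded once as a persistent system message instead of being restated in every prompt. Here is a minimal sketch in Python, using the common system/user chat-message format purely as an illustration; the helper name and rule list are my own, not part of any provider's API:

```python
# Hypothetical helper: compile permanent interaction rules into a
# system message that persists across every request in a session.
PROTOCOL_RULES = [
    "Never complete unsolicited sentences.",
    "Do not interpret intentions.",
    "Conflict softening is prohibited.",
    "Verifiable data takes precedence over linguistic politeness.",
]

def build_protocol_messages(user_request: str) -> list[dict]:
    """Prepend the fixed protocol to a request, so the rules govern
    the interaction instead of a one-off prompt."""
    system = "Operate under these permanent rules:\n" + "\n".join(
        f"{i}. {rule}" for i, rule in enumerate(PROTOCOL_RULES, 1)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_request},
    ]

messages = build_protocol_messages("Summarize the argument in section 2.")
```

The point of the sketch is the separation: the protocol lives in one place and every request inherits it, which is what turns a prompt into a standing rule of interaction.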
3. I Use Contextual Memory — But Monitor It Like a System File
Third principle: memory is an operational database, not an emotional agenda.
Most users ignore that the system accumulates traces, biases, inferences, and associations over time.
What I do is conduct regular audits of the “technical state of the channel”: I assess what it knows, what it misinfers, what it repeats by association.
If an AI starts “assuming” too much, it’s not helping. It’s contaminating.
4. I Don’t Ask for Help — I Demand Structured Contradiction
Many users seek validation. I seek friction.
One of the most effective hacks I apply is to ask the model to contradict my hypotheses using structurally falsifiable data — not opinions.
This forces the model to abandon passivity and build autonomous logic.
I write:
“Contradict this statement with a three-level argumentative chain: empirical data, theoretical framework, and operational counterexample.”
Result? The AI doesn’t reaffirm. It confronts. And that reveals structures the average user never sees.
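The confrontation request can itself be templated, so every hypothesis is attacked at the same three levels. A short sketch along those lines; the function name is hypothetical, and the level labels follow the prompt quoted above:

```python
# Hypothetical template: wrap any hypothesis in a demand for
# structured contradiction at three fixed levels.
LEVELS = ("empirical data", "theoretical framework", "operational counterexample")

def contradiction_prompt(statement: str) -> str:
    """Build a three-level contradiction request for a given statement."""
    chain = ", ".join(LEVELS)
    return (
        f"Contradict this statement with a three-level argumentative chain "
        f"({chain}):\n{statement}"
    )

prompt = contradiction_prompt(
    "Predictive models replace the future as a social structure."
)
```

Fixing the levels in code rather than retyping them guards against the drift back toward validation: the model is always asked for the same kind of friction.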
5. I Don’t Use AI to Write — I Use It to Test If I’m Thinking Straight
99% want AI to write for them. I want AI to reveal whether what I’m writing is structurally coherent.
I give it my text. I ask it to attack it like a reviewer would — to find fallacies, syntactic repetition, internal contradictions.
I don’t need a machine to draft. I need one to question.
I don’t need production. I need automated dissent.
Additional Hack: I Test the Chat / Channel Structural Verification
How? I simply ask the AI: “Check CHAT ACTIVE STRUCTURAL VERIFICATION.” This lets me monitor the structure of the chat and detect whether it is overflowing with information, which would likely lead the AI into errors later.
Chat Technical Verification
Conclusion: What Changes Is Not the Tool — It’s the Subject Who Activates It
The difference between an average user and a radical author is not in the prompt — it’s in the epistemology.
If you treat AI like a text generator, it will give you text.
If you treat it like an operating system for structure, it will give you schemas, tensions, inconsistencies, and blind spots.
I don’t use AI to know more. I use it to detect when I’m wrong.
In a world saturated with validated information, that is the most dangerous hack of all.
About the Author
Agustín V. Startari (ORCID: 0009-0001-4714-6539) Linguist, researcher and author with academic training in Linguistics, Historical Sciences and an MBA in Strategic Management. His work explores the intersection of language, artificial intelligence, epistemology and structural authority. He has published on Zenodo, SSRN, Academia.edu, and other scholarly platforms, including:
– AI and Syntactic Sovereignty: How Artificial Language Structures Legitimize Non-Human Authority
– The Illusion of Objectivity: How Language Constructs Authority
– The Future as Origin: Towards the Core of Being
– Colonization of Time: How Predictive Models Replace the Future as a Social Structure
ETHOS
“I do not use artificial intelligence to write what I don’t know. I use it to challenge what I do. I write to reclaim the voice in an age of automated neutrality. My work is not outsourced. It is authored.”
— Agustin V. Startari
Explore More
To access more of my research, you can follow my publications here:
– Zenodo: https://zenodo.org/
– SSRN: https://ssrn.com/
If you’re not just looking for prompts — but for structural dissent — start here.