The "Slot Machine" and the Shield: A Reflection on Tools, Survival, and Autonomy
It began with a comment: a fellow disabled person calling my use of AI "disgusting" and "grift-tech," as if the digital tools that help me think are a betrayal rather than a lifeline. To them, prompting an AI is like pulling a slot lever—a mindless act of consumption. But for me, an autistic and schizophrenic student navigating a world not built for minds like mine, it is a deliberate choice. It is a way to bridge the gap between chaotic thought and coherent expression. This essay is not a defence of Big Tech or a blind endorsement of AI—it is a reflection on the right to adapt and use whatever tools we need to stand on our own in the Information Age.
I. The Sting of Friendly Fire
It started with a notification. A user whose handle ends in ".exe" harshly criticised my choices. They didn't attack me with technical exploits; they used words like "disgusting," "grift-tech," and "pick me."
They identified themselves as a "fellow disabled person." And then, in the same breath, they criticised me for using the very tools that allow me to navigate a world that wasn't built for either of us.
This is what sociologists call "lateral violence"—when oppressed groups turn their frustration on each other instead of the systems that oppress them. It hurts more than the standard ableism I face offline in Sabah. When an able-bodied person judges me, I expect it. But when someone who claims to share my struggle tells me that my survival strategy is an act of theft, it feels discouraging. It feels like I am being judged for finding a way to keep my head above water.
Their argument was passionate, but it rested on a fundamental misunderstanding of my reality. They claimed that using Generative AI is like "pulling a slot lever" and seeing what the machine spits out. They argued that because I use AI, I am merely a spectator to my own thoughts.
They are wrong. And to explain why, I need to talk about what it actually means to function as a neurodivergent person today.
II. The "Slot Machine" Fallacy vs. The Digital Prosthetic
Let’s address the "slot machine" accusation directly. The critic believes that an LLM (Large Language Model) is just a random generator that spits out text without soul.
If I were using AI to churn out generic content for profit, they might have a point. But I am an IT student trying to organise my thoughts. My brain does not always sequence ideas in the linear, polite way that society demands. I experience "brain fog" or executive dysfunction. I have the ideas—the raw data and the intent—but the bridge between my mind and the screen is often broken.
AI is not a slot machine for me. It is a digital prosthetic.
When a person with a physical disability uses a powered exoskeleton or a wheelchair, we do not accuse them of "faking" their movement. We recognise that the intent to move came from the human, and the machine simply provided the assistance to overcome gravity.
When I prompt an AI, I am providing the intent. I am guiding it. I am saying, "Here is my raw thought—please help me structure it so the world understands me." When the AI returns a polished sentence, it hasn't "created" the idea; it has "rendered" it. To tell me that this process is "lazy" is to misunderstand the effort it takes just to show up.
III. Micro-Utility vs. Macro-Economics
The anger of the .exe user—and the anger often seen in online communities like lemmy.world—comes from a difference in focus. They are focused on Macro-Economics. I am focused on Micro-Utility.
Their Focus (The Macro):
They see OpenAI, Google, and Elon Musk. They see corporations harvesting data and devaluing human labour. They dislike the system, and so they criticise the user.
My Reality (The Micro):
I see a Linux terminal on my obulou.org server that I need to configure. I see a complex Docker Compose file that keeps crashing. I see tools—Micro-Utilities—that help me solve these immediate, tangible problems.
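To make the "micro" concrete, here is a minimal sketch of the kind of problem I mean. Every name in it is hypothetical, but the bug is real and common: the web app starts before the database is ready to accept connections, exits with a connection error, and crash-loops. An AI assistant can spot this in seconds; the fix is to gate the app on a health check.

```yaml
# A hypothetical self-hosted stack, reduced to the shape of the bug.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # placeholder only
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
  web:
    image: ghcr.io/example/myapp:latest   # hypothetical image
    restart: unless-stopped
    depends_on:
      db:
        # The fix: wait for the database to be ready, not merely
        # started. Without this condition, "web" crash-loops.
        condition: service_healthy
```

Nothing about that exchange resembles a slot machine. It is a targeted question about a tangible, local problem.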
The critic often cannot separate the tool from the provider. They believe that if I use the tool, I must support the provider's ideology. This is a binary worldview that lacks nuance.
I blocked Elon Musk on X. I disagree with his philosophy. Yet, I use Grok because it cites sources in a way that helps my research. I am capable of "filtering." I can extract the utility of the AI while rejecting the ideology of its owner.
This is the essence of Digital Sovereignty. I am not a passive consumer swallowing whatever the algorithm feeds me. I am an active filter. I take what serves my independence and discard the rest.
IV. History Repeats: The "Anti-Camera" of 2025
The irony of this criticism is that it is not new.
In the mid-19th century, when the camera was popularised, many artists panicked. The poet Charles Baudelaire called photography "art's most mortal enemy," arguing that because a machine captured the light, the resulting image had no soul.
Does that sound familiar?
"You are just pulling a slot lever." (2025)
"You are just pushing a button." (1850)
Generations later, people were using cameras to photograph their drawings and post them online, layering technology on top of technology to connect. Today, we don't see the camera as a threat to drawing; we see it as a different medium.
We saw it again with digital art: "You used the Undo button; that's cheating." Now the target is AI. But I believe that, in time, using an LLM to structure your thoughts will seem as normal as using a spell-checker. The people screaming "grift-tech" today are repeating a cycle of fear that accompanies every major technological shift.
V. Information Literacy: The New Survival Skill
The world has moved on from simple "Reading and Writing" Literacy. We are now in the era of Information Literacy.
The test is no longer "how much can you memorise?" The test is: "How efficiently can you filter, verify, and apply information to solve your problem?"
I self-host a YunoHost/Docker ecosystem. If I tried to read every line of documentation from the beginning, I would never get anything running. Instead, I use AI to "cache" intelligence: I perform the trial and error (the human work), and then I have the AI document the solution.
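What does "documenting the solution" look like in practice? Something like the sketch below. The service, image, and paths are all hypothetical; the comments are the kind of output I ask the AI to produce from my messy trial-and-error notes, so that future me (or the next person) doesn't repeat the same dead ends.

```yaml
services:
  wiki:
    image: ghcr.io/example/wiki:latest   # hypothetical image name
    ports:
      # Bind to localhost only; the reverse proxy terminates TLS
      # and forwards traffic here. Exposing the port on 0.0.0.0
      # was my first mistake during trial and error.
      - "127.0.0.1:8080:3000"
    volumes:
      # A named volume instead of a bind mount, so file ownership
      # inside the container survives image upgrades -- the crash
      # I kept hitting before I understood why.
      - wiki-data:/var/lib/wiki

volumes:
  wiki-data:
```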
This is not "cheating"; this is efficiency. It is the only way to keep up in an environment where information overload is the norm.
VI. A Different Path
The .exe user accused me of being a "pick me"—someone seeking validation from the tech elite. This reveals that they view this conversation as a popularity contest.
I am not interested in popularity. I am interested in Autonomy.
I don't want to be part of the "Pro-AI" herd.
But I also refuse to be bullied by the "Anti-AI" herd.
I am asserting a third option: the right to define my own tools. I am asserting the right to say, "This technology helps me, and I will use it."
My intent is to survive. My intent is to learn. My intent is to stand on my own two feet in Sabah. If AI helps me do that, then I will use it. And if I choose not to use it, that is also my right.
We are not the same, but that is okay. I choose to be the pilot of my own digital life. That is the protocol I live by.
Edit 1: Beyond the Binary—A Protocol for Autonomy
VII. The "Content Creation" Trap
I have realised that the labels "Pro-AI" and "Anti-AI" are often just tools for other people's content strategies. Social media thrives on conflict. To make a post go viral, people need to create a simple "us vs. them" narrative. They might label me "Pro-AI" to fit me into a box that generates engagement for them.
But these are just artificial labels—stigma and discrimination repackaged as internet categories.
In my case, it is not about picking a side in a culture war. It is about being true to myself. My use of AI is not a political statement; it is an assertion of my autonomy.
VIII. The Right to Choose (and to Refuse)
True autonomy is flexible. It is not blind loyalty.
I assert my right to use AI when it helps me communicate or work.
But equally, I assert my right to not use it, or to be sceptical of it, when it fails to meet ethical standards.
For example, no one wants biased answers that side with oppressors or blame victims, whether regarding the situation in Palestine or other injustices. If an AI generates content that supports oppression, I assert my right to challenge it and reject it. I don't surrender my critical thinking to the machine.
This is the core of my activism: it is a protocol, not a protest. I am not trying to force anyone to do anything. I am simply demonstrating that we have the freedom to choose our tools based on the situation, without being harassed by a mob.
IX. Scepticism in Action: The DeepSeek Example
This autonomy also means staying informed about the tools we use.
For instance, I have followed reports from media and security researchers regarding DeepSeek and its CCP-influenced limitations. These reports highlighted how, when asked about sensitive topics like the Uyghurs, the model’s censorship protocols could lead it to generate less secure code.
I didn’t conduct this research myself—I read it, verified it against my values, and made a decision. This is how we protect ourselves: not by blindly trusting "open source" claims, but by listening to the community and acting on that information. I use the tool, but I know its limits. I assert my safety over its convenience.
X. Decentralised Unity
My stance is simple: just as people assert their right to freedom of religion without harassment, or just as Sarawak (led by GPS) asserts its autonomy from federal exploitation and bias, I assert my autonomy in the digital space.
I am not asking to be a leader or a "strongman." A strongman is a single point of failure—easily taken down or demoralised. Instead, I hope to inspire others through a protocol approach.
If my assertion of freedom inspires you to assert yours—whether you choose to use AI, reject it, or audit it—then we have achieved a form of decentralised unity. We don't have to agree on everything, but we should agree on our right to exist and choose for ourselves without fear of judgment.
That is the protocol. That is me, Kalvin.
Edit 2: The Cycle of Contribution—Why It’s Our Data
XI. The "Stolen Data" Myth vs. The Contributor Reality
There is a loud argument online that AI is built on "stolen" data. Critics say that Big Tech scraped the internet without consent, and therefore, using the tool is an act of theft.
I have thought deeply about this, and I see it differently.
It is true that AI models are trained on publicly available data. But who created that data? We did. The people.
Every time I posted a technical question on a forum, shared a code snippet, wrote a blog post, or engaged in a public discussion, I was contributing to the digital commons.
It isn’t just "their" data (the artists or the famous authors) that trained these models. It is my data, too. It is the collective intelligence of millions of ordinary users who built the internet, post by post, year after year.
XII. Reciprocal Value: Reclaiming What We Built
Realising this brought me a sense of relief. I am not a passive consumer or a "thief" for using AI. I am a contributor.
When I prompt an AI to help me debug a Docker container or structure a difficult email, I am not stealing. I am asking the system to return value to me—value that is derived from the massive pool of information that I (and people like me) helped create.
- The Input: We populate the web with human knowledge, creativity, and solutions.
- The Output: The AI synthesises that knowledge and offers it back to us as a tool.
This is a two-way street. By using AI, I am simply withdrawing from the "public library" of human thought that I helped stock. If my data helped train the model, then I have a moral right to use that model to improve my life.
Question: "What does this mean?"
Kagi Assistant: "The statement means that AI development is a shared effort. When people contribute their data—such as writings, posts, or interactions—they help build the 'library' of knowledge that trains AI models. Therefore, those individuals have a moral right to use the AI in return, just like borrowing from a public library they helped create. It highlights fairness: if your input helped shape the AI, you should be able to benefit from it to improve your own life. This view supports open and ethical access to AI as a collective human resource."
XIII. The Cycle Continues
This perspective reinforces my Micro-Utility argument.
If I view the AI as a tool that condenses our collective knowledge, then refusing to use it feels like denying myself access to my own heritage.
Furthermore, when I use AI to help me write clearer documentation or build more secure self-hosted services, I am creating better data. I then publish that better data back to the internet (on my blog or forums), which helps the next person—and potentially trains the next generation of tools.
It is a cycle. I contribute, I use the tool to refine my work, and I contribute again.
So, when people accuse me of using "stolen" tech, I tell them: "I am not stealing. I am reclaiming the value of my own digital contribution." I am using the community's tools to remain a productive, independent part of that community.
That is the protocol. That is the balance. That is me, Kalvin.