How I Use AI
I use AI to support complex writing and creative work, especially where disability impacts process. I use it carefully and draw clear boundaries around what I will and won’t use it for. This page exists to make my use of LLMs transparent and to stand as a political statement.
AI is built on extraction. I use it anyway, and my use should be named clearly and in public.
I work with large language models like ChatGPT to support creative, technical, and strategic work. That includes drafting language, editing documents, sorting ideas, naming projects or systems, and reflecting on the framing or direction of the work itself.
For me, this use is both practical and political.
What I Use It For
I use text-based AI to:
- Draft and revise long-form content, typically at the second-draft stage
- Build outlines and checklists
- Name projects, concepts, and systems
- Identify patterns across ideas, documents, or time
- Support task momentum when executive dysfunction or anxiety interferes with flow
- Reduce friction around writing that’s historically been difficult (especially persuasive or performance-based content)
- Write or refine alt-text using image recognition tools, where it supports accessibility
I live with long-term executive dysfunction from a traumatic brain injury sustained over twenty years ago. I’ve also had a complicated relationship with writing, especially in contexts where how I said it mattered more than what I meant. That history shaped patterns of avoidance. Working with AI supports my cognitive process, not by doing the thinking for me, but by giving structure to where I get stuck. It gives me access to parts of the work I have difficulty reaching without support.
Boundaries of Use
- No AI-generated visuals or "artwork" in my public work or play
This is a principled refusal in solidarity with artists whose labor and styles have been extracted without consent, and in recognition of the disproportionate carbon cost of generative image systems.
- No AI in decision-making about or for others
Tools that carry bias, opacity, and systemic training data should never mediate harm, access, or accountability. I don’t delegate judgment or decision-making.
- No sensitive personal or community data
These tools were not built with privacy in mind. If something’s sensitive or carries risk for others, I don’t put it into systems I don’t control.
- No publishing raw output
Every AI-shaped phrase I share publicly is rewritten, reviewed, and held to the same standard as anything else I put my name on. If I share something directly generated through a model, I say so and explain how it was generated.
- Search or research
How I Think About It
I use AI with full awareness that these systems are extractive (socially, economically, environmentally, and in how knowledge itself is built). They weren’t built with safety, autonomy, or collective interests in mind, only corporate ones.
I stay close to these tools, not out of trust, but out of vigilance. I want to understand how they work (and how they’re being woven into larger structures of control and power) so I can help resist their worst uses and build toward something better.
🚧 (section on risk mitigation and political resistance, ✍️ writing in progress) 🚧
To mitigate these harms, I eventually plan to run open-source models locally on a solar-powered system. Until then, I use what’s available with care, within the limits of what I can ethically carry.
This page may evolve as the tools do. My commitments and need for personal accountability won’t.
(Also, yes. I used AI to help write an early draft of this page. That feels like something you should know.)