AI Transparency
Cursor, Copilot, Agentic Assistants
i use AI tools and i'm telling you because not telling you would be a form of unclear instruction - and that's the one thing this whole site is arguing against.
How I Use AI
i use Cursor, Copilot, and agentic assistants.
they increase how fast i can iterate. they do not replace how i think, what i decide, or what i sign off on.
the distinction matters. that's why i'm writing it down.
Five Steps
- prompts include constraints and expected outputs. vague inputs produce unstable outputs. that's the sandwich.
- AI proposes. i evaluate. not the other way around.
- every output gets checked for logic, fit, and edge cases before it moves.
- major decisions get documented so i can explain them later.
- nothing ships without a human reading it and saying yes.
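the first step above, constraints and expected outputs spelled out in the prompt, can be sketched as a template. this is a minimal illustration, not my actual tooling; the function and field names here are hypothetical:

```python
# hypothetical prompt template: constraints and the expected output
# shape are stated up front, so vague input has less room to drift.

def build_prompt(task, constraints, expected_output):
    """Assemble a prompt with explicit constraints and an expected output shape."""
    lines = [f"task: {task}", "constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"expected output: {expected_output}")
    return "\n".join(lines)

prompt = build_prompt(
    task="summarize the changelog",
    constraints=["plain english", "no more than 5 bullets"],
    expected_output="a markdown bullet list",
)
print(prompt)
```

the point of the template isn't the code, it's that every field is mandatory: a prompt with no constraints or no expected output simply can't be built.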
When It Goes Wrong
AI gets it wrong more than people expect.
it sounds confident when it's guessing. it fills gaps with plausible-sounding things that aren't true. it optimizes for fluency, not accuracy.
when i catch it, and i do catch it, i don't just fix the output. i go back to the prompt. something in the instruction created room for the wrong answer.
that's the sandwich again. the error is a missing step. find the step. fix the instruction. run it again.
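that loop, check the output, revise the instruction, run it again, can be sketched too. everything here is hypothetical scaffolding: `run`, `check`, and `revise_prompt` stand in for whatever model call and review step you actually use:

```python
# hypothetical review loop: when an output fails a check, the fix
# goes into the prompt, not just the output, and the prompt is rerun.

def review_loop(run, check, revise_prompt, prompt, max_rounds=3):
    """Run a prompt, check the output, and revise the prompt on failure."""
    for _ in range(max_rounds):
        output = run(prompt)
        if check(output):
            return output, prompt  # output passed, and the prompt that produced it
        prompt = revise_prompt(prompt, output)  # fix the instruction, not the output
    return None, prompt  # gave up: a human has to look at this one
```

the detail that matters is the return value: you get back the revised prompt along with the output, because the improved instruction is the artifact worth keeping.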
Closing
transparency isn't a policy here. it's an instruction with no missing steps.
if you can't see how the work was made, you can't fully trust what it does.
that's the whole argument. 🌷