Jade Zhao
Informatics at Indiana University
i think a lot about how things break. not in a pessimistic way - in a curious one. most problems are just unclear instructions in disguise.
A year-by-year look
Freshman Year
2023-2024
"the year i learned that skipping steps always costs you later."
First real code. First community. ServeIT begins.
Sophomore Year
2024-2025
"the year i started teaching and realized i was still learning."
Research deepened. Mentoring became a system.
Junior Year
2025-2026
"the year i took everything i knew somewhere completely new."
Madrid. Health informatics. High-stakes clarity.
Senior Year
2026-2027
"the year it stops being about building and starts being about passing it forward."
Capstone focus. Transfer quality. Leave it usable.
Opus Selectum (Selected Work)
a few things worth looking at. each one is evidence of that year's method.
The PB Sandwich Lesson
a professor stands in front of the class and says: write me instructions for making a peanut butter sandwich.
the class writes things like "spread peanut butter on the bread."
the professor follows exactly what they wrote. nobody said to open the jar, so it stays closed. nobody said to pick up a knife, so nothing gets spread.
the class realizes - their instructions were full of gaps they didn't even notice.
computers are that professor. they do exactly what you tell them. not what you meant. not what any reasonable person would assume. exactly what you said, in exactly the order you said it.
the same logic applies to prompting AI, writing CSS, giving feedback, designing systems, explaining yourself to another person. if someone can't follow your instructions without guessing - the instructions aren't done yet.
- understand what you're working with before you start
- do things in the right order
- say your assumptions out loud
- test the edges before you ship
- treat failure as information, not verdict
it's a humbling exercise. most people think they're being clear. the sandwich proves otherwise.
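the lesson can be sketched in code. below is a toy "sandwich computer" in python, invented for illustration (none of these names come from the actual exercise): it follows each instruction literally, in order, and refuses to guess at anything you didn't say.

```python
# a toy sandwich "computer": follows each instruction literally,
# in the order given, and never assumes a missing step.
# all instruction strings here are illustrative, not a real spec.

def run(instructions):
    jar_open = False
    bread_out = False
    for step in instructions:
        if step == "open the jar":
            jar_open = True
        elif step == "take out two slices of bread":
            bread_out = True
        elif step == "spread peanut butter on the bread":
            # the literal-minded checks: you can't spread from a
            # closed jar onto bread that isn't on the counter yet
            if not jar_open:
                return "error: the jar is still closed"
            if not bread_out:
                return "error: there is no bread on the counter"
        else:
            return f"error: i don't know how to {step!r}"
    return "sandwich made"

# what most of the class writes:
print(run(["spread peanut butter on the bread"]))
# → error: the jar is still closed

# assumptions said out loud, steps in the right order:
print(run(["open the jar",
           "take out two slices of bread",
           "spread peanut butter on the bread"]))
# → sandwich made
```

the point isn't the code; it's that the second call only works because every hidden assumption got promoted to an explicit step.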
AI Transparency
i use AI tools - Cursor, Copilot, agentic assistants.
i'm telling you because not telling you would be a form of unclear instruction - and that's the one thing this whole site is arguing against.
the drafts are assisted. the thinking, the architecture, the final calls - those are mine.
draft generation is AI-assisted. accountability is not.