Serial problem solver.
Fixated on optimizing systems.
Obsessed with structure and flow.
Unapologetic technology geek.
So naturally, I fell in love with UX design.
I'm a product designer working on AI experiences inside Windows at Microsoft. My job sits at a strange and exciting intersection: operating systems, agents, enterprise workflows, trust, orchestration. These are all the messy, high-stakes problems that show up when software stops being a tool and starts acting on your behalf.
The more honest version is that I'm wired to solve problems. I don't really know how to turn it off. If something feels inefficient, structurally off, or just vaguely wrong, I will keep pulling on it until I understand it. Sometimes that means building prototypes late at night. Sometimes it means mapping systems no one asked me to map. Sometimes it means going ten layers deeper than necessary just to make sure the foundation is solid.
I have a habit of obsessively researching the "right" solution to things most people would settle on in an afternoon. I like constraints. I like tradeoffs. I like figuring out what the real problem is underneath the surface problem.
If you ask my family, they'll probably tell you something similar: I'm always researching the most efficient way to do things, even outside of work.
Design, for me, is structure. It's the quiet architecture that makes complexity feel obvious. I care about progressive disclosure. I care about removing noise. I care about building systems that scale without collapsing under their own weight. Good design should feel inevitable in hindsight.
In the AI space especially, I think we're in a moment where clarity matters more than novelty. Agents that interrupt at the wrong time. Automation that feels unpredictable. Interfaces that overpromise and under-explain. Those are structural problems, not styling problems. The work I care about is building the guardrails and interaction patterns that make intelligence feel reliable and human.
I'm not trying to design isolated features. I'm trying to design how systems behave over time. How they learn. How they earn trust. How they stay in flow instead of fragmenting it.
I'm ambitious about the impact of the work, but I'm pretty grounded about the craft. I still sweat alignment, hierarchy, edge cases, and empty states. The details matter because they're where trust is won or lost.
If you're building something that sits at the edge of what software can do, especially where AI and real human workflows meet, that's where I'm most energized. I like hard problems. I like unclear terrain. And I like turning chaos into something that feels simple and durable.
Microsoft
- Windows Agent Platform, 2025 – Present
- Copilot Actions, 2025 – Present
- Windows Recall (AI-Powered Memory Search), 2023 – 2025
- File Explorer Modernization (Windows 11), 2022 – 2023
- Teams for Education, 2020 – 2021
- Microsoft Education, 2018 – 2020

Amazon (Contract)

Nordstrom (Contract)
- Novel interaction paradigms with no existing mental model
- Structure that must emerge from ambiguity
- Problems where trust determines adoption
- Systems thinking at enterprise and OS scale
- Progressive disclosure and calm complexity
If you're working on something complex, whether AI systems, enterprise tooling, or OS-level interaction, and you want it to feel calm, intentional, and trustworthy, I'd love to hear about it.