Agents in Windows

Solving trust and visibility for AI agents in the operating system

OS shell · Systems thinking · Ambient AI · Senior Designer · 2025
Agents in Windows, Researcher agent showing multi-step task progress on the desktop

Overview

Role: Senior UX Designer
Platform: Windows Shell
Focus: Agent visibility, trust, and OS integration
Challenge: Users welcomed assistance but disliked surprises
Solution: Taskbar presence, hover summaries, unified invocation
Impact: Agents became visible, interruptible, and integrated in the OS
Context

Windows is evolving into a canvas for AI agents

Microsoft leadership, including Satya Nadella, has articulated a clear shift in how AI will transform knowledge work. The future is not about replacing people outright. It is about agents taking on routine or time-consuming tasks so humans can focus on higher-value thinking and collaboration.

That vision places Windows in a new role. It is no longer just a platform for launching apps. It becomes the environment where AI agents are deployed, monitored, secured, and orchestrated. The opportunity is to make Windows the best place for developers and enterprise workers to run intelligent agents, from personal productivity helpers to broader organizational automation.

This work was not about embedding Copilot into Windows as a feature. It was about establishing Windows as the canvas where agents become first-class citizens of the operating system.

Agents as first-class OS citizens
Problem

People do not object to help. They object to not knowing what is happening.

As AI agents began to take on more responsibility, their presence across the system became fragmented. Some lived inside app experiences. Some surfaced through Copilot conversations. Others relied on notifications. Users were left to piece together what was happening and where.

Early feedback and testing revealed a consistent pattern. Users welcomed assistance. They did not welcome surprises. They wanted situational awareness. They wanted to know what was running, what was complete, and what required attention.

In other words, people did not fear automation. They feared invisibility.

This insight became the narrative spine of the work. Clarity creates trust. If agents were going to act on a user's behalf, they needed to be visible and understandable at the system level.

Process

Making agents first-class citizens of the OS

Agents behave differently than traditional applications. They can run in the background, interact with files and system resources, and take longer than a typical user action to complete. They are not always tied to a single window.

We explored multiple structural models. Should agents live inside the apps that invoke them? Should they be pinned independently? Should they have a dedicated panel or dashboard? Each option had tradeoffs. Some approaches made agents too heavy. Others made them too hidden.

Out of discretion for this project, I cannot share many of the early explorations in detail, as the system continues to evolve. What I can share is the tension. We were balancing a long-term vision for agent-native operating systems with the reality that Windows is used by hundreds of millions of people. We cannot shift patterns abruptly. We have to evolve them thoughtfully.

The solution was to extend existing OS contracts rather than inventing entirely new ones. Agents would behave like apps in the ways users already understand, while gaining additional visibility and state awareness specific to their behavior.

Agents on the Taskbar

One of the most important shifts was giving agents a persistent presence on the taskbar. After initiating an agent task, whether from Microsoft 365 Copilot or from Ask Copilot on the taskbar, the agent appears as an icon just like a regular application. That familiarity matters.

From there, status badging communicates what the agent is doing. Hover cards reveal contextual information about progress and provide lightweight controls. Users can monitor activity without opening a full interface or losing their place in their current work.

We defined a clear and consistent state model. Idle indicates the agent is waiting for input. Active signals that work is in progress. Needs attention communicates that user intervention is required. Complete indicates that the final artifact or outcome is ready.
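
To make the state model concrete, here is a minimal sketch of how it might be expressed in code. This is an illustration only: the type names, badge glyphs, and hover-card fields are my assumptions, not the actual Windows shell implementation.

```ts
// Hypothetical sketch of the four-state agent model described above.
// None of these names come from the actual Windows implementation.
type AgentState = "idle" | "active" | "needsAttention" | "complete";

interface AgentStatus {
  agentId: string;
  state: AgentState;
  summary: string;    // short, glanceable text surfaced in the hover card
  progress?: number;  // optional progress for long-running work, 0 to 1
}

// Map each state to a minimal taskbar badge: legibility first, decoration second.
function badgeFor(status: AgentStatus): string {
  switch (status.state) {
    case "idle":           return "";   // no badge: nothing to report
    case "active":         return "…";  // work is in progress
    case "needsAttention": return "!";  // user intervention required
    case "complete":       return "✓";  // artifact or outcome is ready
  }
}
```

The design intent the sketch captures is that every state maps to at most one quiet visual cue, so a glance at the taskbar answers "what is happening" without opening anything.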

Each state uses minimal visual cues. Nothing decorative. Nothing flashy. The goal was legibility first and decoration second. Users should be able to glance at the taskbar and understand what is happening immediately.

This approach turned the taskbar into more than a launcher. It became a dynamic control surface for agent activity. Users can monitor, intervene, and retrieve completed work directly from the shell rather than hunting through conversations or buried notifications.

Taskbar presence and observability of agent activities
Unified Invocation

Ask Copilot as a system-wide entry point

In parallel, we helped shape a unified invocation model through Ask Copilot on the taskbar. This composer allows users to launch agents through text or voice from anywhere in the OS. By typing the @ symbol, users can directly reference specific agents.
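
As a rough sketch of what @-mention routing in a composer could look like, consider the following. The registry contents, matching rules, and function names are hypothetical assumptions for illustration, not the shipped behavior.

```ts
// Hypothetical sketch: resolve an "@agent" reference typed into a composer.
// The registry entries and parsing rules below are illustrative assumptions.
const agentRegistry = new Map<string, string>([
  ["researcher", "agent.researcher"],
  ["analyst", "agent.analyst"],
]);

interface Invocation {
  agentId: string; // resolved agent to launch
  prompt: string;  // remaining text handed to the agent
}

function parseInvocation(input: string): Invocation | null {
  // Match a leading "@name" token, e.g. "@researcher summarize this report".
  const match = input.match(/^@(\w+)\s+(.*)$/);
  if (!match) return null;
  const agentId = agentRegistry.get(match[1].toLowerCase());
  return agentId ? { agentId, prompt: match[2] } : null;
}

// parseInvocation("@researcher summarize this report")
// → { agentId: "agent.researcher", prompt: "summarize this report" }
```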

This unified entry point reinforces a consistent mental model. Agents are not confined to individual apps. They are accessible from the system layer. Invocation and monitoring both live within predictable, centralized surfaces.

Universal agent invocation from the taskbar
Iteration

Cross-discipline collaboration shaped every decision

This work required tight collaboration across design, engineering, and product. We prototyped invocation flows, mapped agent states to OS visuals, and tested interaction patterns under realistic workloads. We iterated on how agents surfaced in different contexts, how they were grouped, and how much information to reveal at once.

Some early concepts relied only on taskbar badges. Others experimented with persistent side panels or heavier notification patterns. In usability testing, users consistently preferred lightweight, in-context visibility that did not interrupt their primary task.

We also invested significant effort in defining state transitions and badge behavior. How does an agent signal completion? How does it request help without feeling alarming? How does a user unblock an agent without losing context? These details may seem small, but they shape trust over time.
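
One way to picture that rigor is a transition table that constrains which state changes are legal. Building on the hypothetical state model sketched earlier, the table below is an assumption of mine, not the production logic.

```ts
// Hypothetical transition table for the agent states sketched earlier.
// Illegal moves (e.g. idle → complete) are simply absent from the map.
type AgentState = "idle" | "active" | "needsAttention" | "complete";

const transitions: Record<AgentState, AgentState[]> = {
  idle:           ["active"],                             // user invokes the agent
  active:         ["needsAttention", "complete", "idle"], // blocked, done, or cancelled
  needsAttention: ["active", "idle"],                     // user unblocks or cancels
  complete:       ["idle"],                               // user retrieves the artifact
};

function canTransition(from: AgentState, to: AgentState): boolean {
  return transitions[from].includes(to);
}
```

Defining the legal moves explicitly is what makes badge behavior predictable: an agent can never jump to "complete" without passing through visible work, and a user can always pull it back to idle.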

Because the project continues to evolve, not all iterations can be shown publicly. What matters is the underlying principle. Agents must feel integrated into the system in a way that is predictable and familiar.

Impact
Agents on Windows taskbar, announced at Microsoft Build

From opaque automation to collaborative assistance

Today, agents in Windows feel visible and understandable. They are discoverable at a glance and interruptible when needed. Completed work does not disappear into hidden surfaces. It remains accessible and contextual.

This shift has significant implications for trust. When users can see automation working on their behalf, they are more comfortable delegating tasks. When they can pause or stop an agent easily, they feel in control.

  • Visibility unlocks trust. Agents should never feel hidden if they are acting for you.
  • Control builds confidence. Pause and stop affordances matter.
  • Integration must be seamless and predictable.
  • Agents should feel like tools you choose to use, not black boxes you are expected to accept.