Designing AI Systems People Trust

Dates: July 2025 – Present
Role held: UX Manager of AFT Operator Experience, spanning labor planning, package flow, and environmental health and safety product experiences.


Overview:
As AI-driven recommendations became increasingly embedded in fulfillment workflows, operators were asked to rely on automated systems to make time-sensitive, high-impact decisions. These systems influenced planning, prioritization, risk assessment, and operational execution across hundreds of fulfillment facilities.

While the underlying models continued to improve, adoption and trust lagged behind capability. Operators hesitated to follow recommendations, frequently overrode automated outputs, or sought secondary confirmation before acting. In some cases, teams disengaged from AI-assisted workflows entirely.

Designing for Trust, Explainability, and Human Judgment

Rather than treating low adoption as a training or change-management issue, I reframed trust as a design responsibility. For AI to function effectively in operational contexts, users needed more than recommendations — they needed context, transparency, and clear signals about how to incorporate automation into their decision-making.

I supported the development of a systematic approach to AI explainability and trust, grounded in real operational workflows. This work focused on defining when explainability was required, what level of transparency was appropriate, and how explanations should be presented without overwhelming users.

A key leadership decision was distinguishing between principles and patterns. We established high-level guidance that articulated why trust mattered and how AI should support human judgment, paired with reusable interaction patterns teams could apply consistently across tools. This allowed explainability to scale without requiring every team to reinvent solutions or make ad hoc design calls.

Throughout this work, the emphasis remained on human-in-the-loop design. AI was positioned as a decision-support partner rather than an authority. Designs surfaced confidence signals, limitations, and contributing factors, enabling operators to understand why a recommendation existed and how to act on it responsibly.

Results & Impact

As explainability patterns and trust guidelines were applied, operators gained clearer insight into system behavior and decision rationale. Hesitation and override behavior decreased as users developed confidence in when and how to rely on automation. Adoption increased not because users were told to trust AI, but because the systems earned that trust through clarity and predictability.

At an organizational level, this work established a shared language for AI trust across teams, reducing inconsistency and uncertainty in how AI features were designed and evaluated. Product and engineering partners gained clearer expectations, and UX played a central role in shaping AI experiences rather than reacting to them after the fact.

Most importantly, AI systems became more effective in the environments they were built for. By designing for transparency and judgment, automation augmented human decision-making instead of undermining it — improving operational confidence in systems used daily across complex, high-risk workflows.
