✦ Case Study — Lead UX · AI-Native Redesign

ServiceONE — Redesigning a legacy CRM for the era of agentic intelligence

ServiceONE (UAE)
Elevator Field Service
Lead UX Designer
5 Designers
In Progress · 2025–2026
LLM · ML · Agentic Flows

A platform that was reactive by design

162+
Open service calls at any given moment
85%
Target SLA closure rate
40+
Static report templates operators navigated
5
Design team members I lead end-to-end

ServiceONE is a mission-critical field service CRM used by elevator maintenance teams across the UAE to handle thousands of service callbacks, dispatch mechanics, manage multi-building contracts, and ensure every trapped passenger gets help within SLA. The existing platform was functional but exhausting — a dense, form-heavy interface that forced experienced operators to work against the software rather than with it.

I was brought in as Lead UX Designer to direct a complete product redesign — not just a visual refresh, but a fundamental rethinking of how intelligence flows through a service management platform. My mandate: embed AI, machine learning, and agentic automation so deeply into the UX that the system begins to surface priorities, recommend actions, and complete tedious workflows on behalf of the user.

This case study documents the design strategy, AI integration philosophy, key decisions, and outcomes from a year-long engagement that is actively reshaping how a team of dozens of field service operators experiences their work every day.

01 · Problem

Operators were drowning in information — blind to what matters most

🔍
No Priority Intelligence
The system listed 162 open calls in chronological order. Deciding which to act on first took mental effort and institutional knowledge; the system offered no supporting logic.
📋
Form-Driven, Not Flow-Driven
Registering a callback required navigating 8-step forms with no auto-fill, no smart suggestions, and no awareness of similar open tickets in the same building or unit.
🔄
Dispatch Was Manual Guesswork
Supervisors manually matched mechanics to jobs based on memory. No system-assisted proximity analysis, no skill-matching, no workload balancing.
📊
40+ Static Reports
Every report was a pre-built template. Operators who needed a cross-report view had to export to Excel and build it themselves — daily.
⏱
SLA Breaches Discovered Late
SLA countdown timers were visible only on the detail screen. By the time an operator noticed a breach risk, escalation was already inevitable.
🏢
Context Fragmented by Building
Multiple open faults in the same building (e.g., Burj Khalifa) weren't surfaced together. Dispatching a mechanic for one fault while 3 others in the same building sat unaddressed was common.
"The system tells me everything that's happening. It just never tells me what I should do about it first."
— Senior Field Service Operator, Research Interview · Dubai, 2024
02 · AI Strategy

Designing an intelligent layer — not just smart features

The strategic insight that guided every design decision: AI shouldn't be a feature you go to — it should be the substrate the entire experience runs on. We structured our AI integration across five interoperating layers, each contributing to a system that gets smarter the more it's used.

01
Priority Intelligence Engine
An ML model scores every open call based on SLA proximity, complaint severity, building tier, trapped status, and historical resolution data. The dashboard reorders itself in real time around urgency — not data entry order.
ML Scoring
02
Agentic Dispatch Recommender
When a high-priority call is flagged, the system automatically evaluates available mechanics by proximity, certification, current job load, and past performance at the specific unit — then surfaces a dispatch recommendation with confidence score.
Agentic AI
03
LLM-Powered Form Automation
Callback registration uses an LLM to pre-fill fields from historical data, unit records, and customer complaint text. What was an 8-step form is now a 2-confirm flow for returning buildings. New complaints are classified and tagged automatically.
LLM
04
Conversational Report Interface
Natural language report queries replace 40+ static templates. Operators ask "Show me all SLA breaches in Abu Dhabi this month by building" and receive a structured, visual, exportable report — generated and explained in seconds.
NLP · LLM
05
Predictive SLA Alerting
Rather than alerting on breach, the system predicts breach probability 45 minutes out based on current queue depth, mechanic availability, and traffic conditions. Supervisors are notified before the window closes — not after.
Predictive ML
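The first of these layers can be made concrete. Below is a minimal, hypothetical Python sketch of the kind of weighted urgency scoring the Priority Intelligence Engine describes; the feature names and weights are illustrative assumptions, not ServiceONE's production model, which would learn its weights from historical resolution data.

```python
from dataclasses import dataclass

@dataclass
class Call:
    minutes_to_sla: int   # time left before SLA breach
    severity: int         # 1 (low) .. 5 (critical), from complaint classification
    building_tier: int    # 1 (standard) .. 3 (premium contract)
    trapped: bool         # passenger trapped in cabin

def urgency_score(call: Call) -> float:
    """Higher score = act sooner. Weights here are illustrative."""
    # SLA pressure ramps from 0 to 1 as the deadline approaches.
    sla_pressure = max(0.0, 1.0 - call.minutes_to_sla / 120.0)
    return (
        50.0 * sla_pressure
        + 8.0 * call.severity
        + 4.0 * call.building_tier
        + 30.0 * (1.0 if call.trapped else 0.0)
    )

def prioritize(calls: list[Call]) -> list[Call]:
    # The dashboard reorders by urgency, not data-entry order.
    return sorted(calls, key=urgency_score, reverse=True)
```

A trapped passenger 20 minutes from breach outranks a low-severity call with a comfortable SLA window, which is exactly the reordering behavior the dashboard exposes.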
03 · Design · AI Dashboard

From data list to decision surface

The Callback Dashboard is the operational heartbeat of the platform. The redesign transformed it from a sortable table into an AI-driven decision surface — where every element is weighted by urgency, every row carries contextual intelligence, and the system actively guides the operator's attention.

ServiceONE — Callback Dashboard (Redesign)
AI Active · Priority Mode
Core
🏠 Home
📞 Callbacks
📋 Contracts
🏢 Buildings
Field
🗺 Dispatch
📅 Scheduling
👷 Mechanics
Intelligence
✦ AI Insights
📊 Reports
AI Priority Alert — Burj Khalifa Cluster
3 active faults detected in same building. Mechanic #4 (Ahmed Al-Farsi) is 4 min away and certified for all fault types. Dispatching now would resolve 3 tickets and save est. 2.1 SLA hours.
Dispatch Mechanic #4 →
Review First
162
Open Calls
↑ 2.4% this week
12
Trapped Calls
↑ 3 urgent
85.2%
SLA Closure Rate
↑ Target met
94%
AI Rec. Accuracy
↑ AI Active
Unit No | Priority | SLA | Trapped | Reach By | Complaint | Building | AI Rec.
124463 | HIGH | 45 Min | Yes | 28 Min | ⚠ Elevator Stopped | Burj Khalifa | Dispatch #4 →
124463 | HIGH | 32 Min | Yes | 20 Min | Access Card Issue | Burj Khalifa | Bundle w/ above
124463 | MED | 23 Min | Yes | 18 Min | Alarm Not Working | Burj Khalifa | Bundle w/ above
198821 | LOW | 18 Min | No | 18 Min | Arrival Gong Fault | DIFC Gate | Assign #7 →
203341 | NONE | 12 Min | No | 10 Min | Cabin Light Diffuser | Marina Mall | Schedule later
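The SLA and Reach By columns feed the predictive alerting described earlier. A minimal sketch of that logic, assuming a simple projected-completion heuristic in place of the real ML model (all parameter names are illustrative):

```python
def at_risk(sla_minutes_left: float,
            eta_minutes: float,       # mechanic travel + queue wait
            avg_fix_minutes: float,   # historical resolution time for this fault type
            horizon: float = 45.0) -> bool:
    """Flag a ticket before its SLA window closes, not after.

    A ticket is at risk when its projected completion time already
    overshoots the remaining SLA window, and the breach falls inside
    the action horizon where a supervisor can still intervene.
    """
    projected = eta_minutes + avg_fix_minutes
    return projected > sla_minutes_left and sla_minutes_left <= horizon
```

A ticket 40 minutes from breach with a 55-minute projected completion is flagged; a ticket 90 minutes out is left alone until it enters the 45-minute horizon.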
04 · Design Decisions

Every decision had a why rooted in operator reality

Before · Legacy
8-Step Callback Registration Form
Operators filled in all 8 fields manually for every new callback — unit number, building, complaint type, contact info, priority, SLA bracket, assignment, notes. No field was pre-filled, no suggestion was offered. Repeat callers from the same building required full re-entry.
Avg. 4.2 min per registration
✦ After · AI-Native
LLM-Assisted 2-Confirm Flow
As the operator types the unit number, the LLM pre-fills building, contacts, and open ticket history. Complaint type is auto-classified from free text. Priority is ML-scored instantly. The operator confirms or adjusts — average completion drops from 4.2 minutes to under 50 seconds.
✦ Avg. 48 sec per registration
Before · Legacy
Flat Chronological Ticket List
All 162 open calls displayed in order of entry. No visual hierarchy. No urgency signal. SLA countdown only visible on individual ticket detail pages. Operators relied entirely on memory and experience to mentally prioritize — a cognitive load that compounded under pressure.
Prioritization: 100% manual
✦ After · AI-Native
ML-Sorted Priority Surface with Cluster Awareness
The dashboard reorders itself by AI urgency score every 60 seconds. Tickets in the same building cluster are visually grouped and surfaced together. SLA breach risk is surfaced inline — 45 minutes before breach, not after. A floating AI insight bar proactively narrates what requires attention right now.
✦ Prioritization: AI-augmented
Before · Legacy
Manual Mechanic Dispatch
Supervisors held a mental map of who was where and what they were capable of. Dispatch decisions were made over phone or internal chat. No system support, no proximity data, no skill-matching, no workload visibility.
Decision speed: slow · Error-prone
✦ After · AI-Native
Agentic Dispatch with Confidence Score
For every high-priority ticket, the system automatically evaluates all available mechanics and surfaces a ranked recommendation with live ETA, skill match percentage, and current load. One-tap dispatch. Human remains in control — the AI removes the legwork, not the judgment.
✦ Decision speed: < 10 seconds
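The dispatch recommendation above reduces to a ranking problem. A hypothetical sketch of that step: score each available mechanic, surface the best match, and derive a confidence value from how far the winner leads the runner-up. Field names and weights are assumptions for illustration, not the shipped model.

```python
from dataclasses import dataclass

@dataclass
class Mechanic:
    name: str
    eta_min: float        # live ETA to the fault site
    skill_match: float    # 0..1, certifications vs. fault type
    open_jobs: int        # current workload
    site_history: float   # 0..1, past performance at this unit

def dispatch_score(m: Mechanic) -> float:
    eta_factor = max(0.0, 1.0 - m.eta_min / 60.0)   # closer is better
    load_factor = 1.0 / (1.0 + m.open_jobs)          # lighter load is better
    return (0.4 * m.skill_match + 0.3 * eta_factor
            + 0.2 * load_factor + 0.1 * m.site_history)

def recommend(mechanics: list[Mechanic]) -> tuple[Mechanic, float]:
    """Return the top candidate plus a confidence score.

    Confidence grows with the margin over the runner-up, so a close
    call reads as low confidence. The human approves; the AI only ranks.
    """
    ranked = sorted(mechanics, key=dispatch_score, reverse=True)
    best = ranked[0]
    if len(ranked) == 1:
        return best, 1.0
    margin = dispatch_score(best) - dispatch_score(ranked[1])
    return best, min(1.0, 0.5 + margin)
```

Framing confidence as a margin rather than a raw score supports the trust-calibration finding from testing: a near-tie is presented as a genuine judgment call, not a confident directive.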
05 · AI Feature Design

Six AI-native features that change the game

✦ LLM · Agentic
🧠
Intelligent Callback Registration
The registration form listens as the operator types. An LLM pre-fills unit history, building context, open tickets, and contact details. Complaint text is parsed and auto-categorized. Priority is scored. Forms that took 4 minutes now take under a minute — with higher accuracy.
ML · Predictive
Predictive SLA Guardian
An ML model runs continuously, tracking open tickets against available resources, traffic patterns, and historical resolution times. It flags breach risk 45 minutes before it occurs — giving supervisors time to act. No more surprises. No more post-breach apologies.
Automation · AI Routing
📍
Agentic Mechanic Dispatch
For every high-priority fault, the system evaluates all mechanics in real time — proximity, certification level, current job, performance history — and recommends the optimal assignment with a confidence score. The human approves; the AI does the analytical work.
Building Cluster AI
🏢
Multi-Fault Building Clustering
The system detects when multiple faults exist in the same building and surfaces them together with a dispatch bundling recommendation — reducing mechanic travel time and maximizing efficiency per site visit.
NLP · LLM
💬
Conversational Report Builder
Replaced 40+ static report templates with a natural language interface. Operators type queries like "SLA breaches by building in Q1" and receive structured, visual, exportable reports in seconds. The LLM interprets intent, queries the data layer, and explains the results.
Voice UI · On-Device
🎙
Field Technician Voice Assistant
For mechanics in the field, a voice-driven AI assistant surfaces unit history, guides diagnostic steps, and captures job notes hands-free. Reduces time spent on phone with dispatch and enables complete documentation without touching a screen mid-repair.
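The building-clustering feature above is the simplest of the six to sketch: group open tickets by building and surface any site with multiple faults as a bundling candidate. The ticket shape here is an assumption for illustration.

```python
from collections import defaultdict

def bundle_by_building(tickets: list[dict]) -> dict[str, list[dict]]:
    """Group open tickets by building; keep only bundling candidates.

    A building with 2+ open faults is worth a single bundled dispatch,
    saving the mechanic repeat trips to the same site.
    """
    clusters: dict[str, list[dict]] = defaultdict(list)
    for ticket in tickets:
        clusters[ticket["building"]].append(ticket)
    return {b: ts for b, ts in clusters.items() if len(ts) >= 2}
```

In the Burj Khalifa scenario from the dashboard, three open faults collapse into one bundle with a single dispatch recommendation, while the lone DIFC Gate ticket is routed individually.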
06 · Process & Leadership

How I led a team of 5 through complexity at speed

Phase 01
Discovery & Contextual Research
Led a 6-week research sprint across 3 sites in the UAE — shadowing field operators, conducting contextual interviews with mechanics, and running cognitive walkthrough sessions with supervisors. Mapped 47 distinct pain points across 9 user roles. Established a research repository shared across the 5-person design team to align on findings before divergent exploration began.
Field Research Contextual Inquiry Stakeholder Interviews Journey Mapping
Phase 02
AI Opportunity Mapping
Ran collaborative workshops with engineering and data science to map every identified pain point against AI/ML capability. Prioritized 11 AI integration opportunities by impact-to-feasibility ratio. This was the phase where I established our design principle: "AI should remove cognitive load, never add it." Every AI feature had to be explainable, controllable, and degradable without breaking core workflows.
AI/UX Strategy Cross-functional Workshops Prioritization Frameworks
Phase 03
Parallel Design Sprints
Structured the 5-person team into two workstreams: core platform experience (dashboard, forms, dispatch) and AI layer integration (insight patterns, recommendation surfaces, LLM interactions). I reviewed work weekly across both streams, maintaining design coherence through a shared component library and weekly critique sessions modeled on design studio methodology.
Sprint Planning Design System Figma Critique Culture
Phase 04
Prototype Testing with Real Operators
Built high-fidelity interactive prototypes in Figma and ran 3 rounds of usability testing with 12 operators across experience levels. Key AI interaction challenges surfaced: users initially over-trusted AI recommendations, treating them as commands rather than suggestions. Iterated on confidence score framing, explanation depth, and "override" prominence to calibrate appropriate trust.
Usability Testing High-fi Prototyping Trust Calibration Iteration
Phase 05
Design System & Handoff
Formalized a design system with 180+ components, including a dedicated AI interaction pattern library — covering recommendation surfaces, confidence indicators, explainability drawers, and agentic action states. Coordinated handoff with 4 engineering teams across frontend, backend, ML, and mobile. Established a living component documentation site that engineering teams now maintain alongside the design team.
Design System AI Pattern Library Dev Handoff Documentation
07 · Outcomes

Results that validate the approach

81%
Reduction in callback registration time — from 4.2 minutes to under 50 seconds with AI-assisted form flow
94%
AI dispatch recommendation accuracy in pilot testing — operators accepted more than 9 in 10 suggestions without modification
↓38%
Decrease in SLA breach rate during pilot — driven by predictive alerting surfacing at-risk tickets 45 min before breach
40→1
Static reports replaced by a single conversational NLP interface — dramatically reducing operator cognitive overhead
180+
Design system components shipped, including the first AI interaction pattern library in the organization's history
★ 4.6
Operator satisfaction score (out of 5) in post-pilot survey — up from 2.9 with the legacy system

08 · Learnings

AI Transparency is a Feature, Not a Footnote

Users who understood why the AI was recommending something were 3x more likely to act on it confidently. Explainability UI — the "why" behind a recommendation — wasn't a nice-to-have. It was the difference between adoption and skepticism.

Agentic UX Requires a New Design Language

Traditional UX patterns — forms, flows, menus — don't describe what happens when a system begins acting on behalf of a user. We developed new patterns for agentic confirmation, graceful override, and AI state communication that don't yet exist in common design systems.

Leading Across Disciplines is the Real Design Challenge

The hardest design decisions weren't visual — they were organizational. Aligning engineering, data science, product, and field ops around a shared AI philosophy required as much facilitation craft as design craft.

The Best AI Designs Disappear

When AI is working well in a UX, users don't say "the AI helped me." They say "I got through the queue faster today." The goal of AI-native design is not to showcase intelligence — it's to make the human feel more capable.

Want to see the full prototype?

Interactive Figma walkthrough, design system documentation, and full research findings available on request.

Get in Touch ↗