Summary
CTA is contracted with the US Navy to provide a suite of financial and project management applications. Our job was to help several naval departments gather, store, and analyze data so that they could efficiently produce some of our military's most innovative technologies. Hundreds of users interact with the application suite every day, each contributing their specialty to a multi-layered and complex PPBE (Planning, Programming, Budgeting, and Execution) process. Dozens of data types stream into the department from multiple external sources and are ultimately stored in our application as a record of truth. This moved users out of the depths of uncontrolled, often inaccurate Excel documents and into a modern age of data entry and analysis, supported by a kaleidoscope of data visualizations for decision making.
My Role
Solo UI/UX Designer in a 100% remote work environment
- Requirements - Define problem statement and ideal solution based on user requests
- Analysis - Evaluate pre-written feature lists and ideate solutions based on all known factors
- Design - Create low/high fidelity mockups of chosen solutions. Write technical guides on how to implement the solutions
- Handoff - Review technical guide with development team
- Bugs - Evaluate and suggest solutions for frontend and UX related bugs
- Help Content - Create tutorials and aids using WalkMe
- Design System - Create and maintain a design system for the application
UX Toolbox
While I can't share actual deliverables and screenshots, these are some of the UX tools I employed during Empathy, Discovery, and Ideation exercises:
Achievements
During the 10 years I spent working on this application, I designed, implemented, and tested hundreds (potentially thousands) of features; honestly, I lost track.
Business Impacts
- Reduced the quantity of bugs found in test and production environments by introducing process improvements and improving stakeholder communication.
- Reduced developer rework by adding detail to UX/UI deliverables regarding expected outcomes and acceptance criteria.
User Impacts
- Improved consistency, recognition, aesthetics, and several other usability heuristics by incrementally overhauling the UI with a modern framework and creating a design system to establish standards.
- Designed flexible, auto-calculated worksheet pages that closely reflected users' standard workflows so that they could generate data views in one place instead of relying on Excel.
- Designed a global search feature that would allow users to quickly find singleton or grouped data objects from over 12 years of archived data so that users could spend more time completing tasks and less time digging for information.
- Designed a personalized and customizable menu system so that each individual user could access the data they needed without excess visual clutter.
- Designed a multitude of information architecture improvements and reorganized the sitemap so that users could shorten workflows and visualize their data instantaneously.
- Designed wizard-like forms to consolidate complex linear workflows so that users could better understand internal processes and progress.
- Designed data tagging and preference features so that users could focus on their own current and recent workloads instead of manually filtering through colleagues' work.
- Redesigned dozens of web forms to pre-fill known datapoints, rearrange field order, and improve validation messaging so that users could complete data entry tasks more quickly and easily.
Problem statements were drafted from user requests so that we could better empathize with what they were asking us to build. Oftentimes we would try to analyze and interpret their request to find the core issue instead of just building exactly what they asked for.
Ideal solution statements would provide a boundary for our ideation, narrowing our UX focus to achieve a desired outcome with MVP solutions.
Problem Statement:
As a [user/role],
I need [affordance]
because [pain points].
Ideal Solution:
The ideal solution would allow the user to [affordance]
so that [desired UX outcome].
For each change request, I prepared a table of viable options to present to stakeholders and highlighted options that would create maximum UX value. The Project Manager ultimately decided which solutions to move forward with.
Solutions had to meet the following parameters before being presented to the Project Manager:
- Technically feasible with our tech stack
- Meet all technical and behavioral requirements
- Be achievable before a predetermined release date
- Solve the problem in its entirety with minimal effort
ID | Solution Title | Description | LOE - Design | LOE - Development
1  |                |             |              |
2  |                |             |              |
3  |                |             |              |
4  |                |             |              |
Given my background as a developer, I was tasked with creating hefty documentation that translated the feature-list requirements, user request, chosen ideation solutions, mockups/prototypes, and frontend/backend implications into a step-by-step guide for the developers.
This document was the final output I'd provide for a feature.
- Facilitate meetings with Database Team, Development Team, and Test Teams to review scope
- Generate a recipe-like technical guide on how to build the intended design, similar to Acceptance Criteria in a User Story but much more developer-focused
The guide itself was organized into the following sections:
- Feature Summary
- Problem Statement
- Ideal Solution
- Requirements
- Solution Details
- Mockups
- Help Content
- Known Test Cases
Typically, a several-hour meeting occurred in which I would brief the entire development team, walking through the solution spec and tweaking it on the fly if changes were needed or use cases were missed. This meeting ceremonially marked the end of design for the feature(s), after which I would pick up a new request or attend to my other responsibilities.
Oftentimes I worked on 4-5 of these feature requests at a time, handing the development team what amounted to several weeks' or even months' worth of work. This freed up my schedule to work on bugs and help content and to maintain the design system until the Project Manager approved the next batch of requests.
Measuring Outcomes
Quantitative Measurement
Due to security protocols, we could not have any analytics tools built into our application to track heat mapping, usage, performance, etc.
Instead, we relied on audit data recorded in the production databases to monitor certain quantitative analytics. The values we pulled from this data could only validate a fraction of our feature successes.
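To illustrate the kind of signal this gave us, here is a minimal sketch of a monthly record-count pull; the audit table, column names, and sample rows are hypothetical stand-ins, not our production schema or tooling.

```python
# Hypothetical sketch: counts how many new records each feature generated per
# month from a stand-in audit table, the rough shape of the quantitative
# signal we could pull from production audit data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE audit_log (
           feature    TEXT,     -- which feature created the record
           record_id  INTEGER,  -- the data object that was created
           created_at TEXT      -- ISO timestamp of the audit entry
       )"""
)

# Sample rows standing in for real audit entries.
conn.executemany(
    "INSERT INTO audit_log VALUES (?, ?, ?)",
    [
        ("global_search", 101, "2023-01-05T10:12:00"),
        ("global_search", 102, "2023-01-18T14:30:00"),
        ("worksheet",     201, "2023-02-02T09:05:00"),
        ("global_search", 103, "2023-02-20T16:45:00"),
    ],
)

# Monthly record counts per feature: flat or shrinking counts after a release
# suggested the feature was not actually being used.
rows = conn.execute(
    """SELECT feature, substr(created_at, 1, 7) AS month, COUNT(*) AS new_records
       FROM audit_log
       GROUP BY feature, month
       ORDER BY feature, month"""
).fetchall()

for feature, month, count in rows:
    print(f"{feature:>14}  {month}  {count}")
```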
Qualitative Measurement
User testing was also difficult to come by, as we were a remote, out-of-state team and the users had extremely limited availability. They told us directly on several occasions that observed or recorded user testing was not an option, so our qualitative metrics largely came from verbal and email feedback.
- High Success - positive user feedback and/or benchmarked technical performance improvements
- Average Success - little to no user feedback; the users were back to "being busy," and our audit data captured them utilizing the feature consistently
- Failure - negative user feedback, requests to revert changes, or a lack of database growth signaling a lack of feature usage