AR Annotations

Product Design, Mobile Design, Augmented Reality
1 Designer (me!), 1 PM, 1 SWE, 1 Senior Technical Designer
Sept 2019 - Jan 2020


I worked on an augmented reality application called Remote Assist that enables factory-floor technicians and engineers to get remote assistance on complex machinery via 3D annotations. Our clients were reporting that annotations were being placed inaccurately in their environments. My team and I created a multifaceted solution, combining an algorithmic change to how ARCore detected placement surfaces with a new input model that made the placement experience clearer.



While the HoloLens version of Remote Assist had been out for two years, the mobile version had been released just before I joined the team. Because of this, we had a lot of tech and design debt to address. After gathering the most pressing customer feedback, we found that the top issue was the accuracy of annotations, which came down to three problem areas:

  1. Annotation placement
  2. Spatial tracking
  3. Placement algorithm

I tackled all three facets of the problem with their own unique solutions:

  1. Annotation placement → Center-locked Model
  2. Spatial tracking → Error Messaging
  3. Placement algorithm → Algorithm Modification
Center-locked Model

I designed a new patented placement experience that:

  • Avoids “fat-thumbing,” where the user’s thumb covers the arrow as they’re trying to place it
  • Encourages users to move their phone around more, which means more feature points are tracked and tracking is more accurate
  • Allows for constant visual feedback on the tracking plane
  • Allows for rotation interactions that create parity with the HoloLens experience
Error Messaging

I also created a series of error messages that would give more precise feedback when tracking wasn’t working. Instead of showing users a generic error message, we’d tell them specifically what they should do to correct the problem.

Algorithm Modification

Lastly, I worked with my SWE and design prototyper to understand how the algorithm was detecting planes to place annotations onto. We then created our own solution that generated mini planes out of feature points, which ended up being more accurate.
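The shipped implementation isn’t shown here, but the core idea of fitting small local planes to clusters of feature points can be sketched roughly as follows. Everything in this snippet (the grid bucketing, the least-squares fit, all names and parameters) is my own illustration of the concept, not the actual Remote Assist algorithm:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a cluster of 3D points.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value is the direction of least variance,
    # i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def mini_planes(feature_points, cell_size=0.1, min_points=4):
    """Bucket feature points into a coarse 3D grid and fit one small
    plane per sufficiently populated cell."""
    cells = {}
    for p in feature_points:
        key = tuple(np.floor(p / cell_size).astype(int))
        cells.setdefault(key, []).append(p)
    return [fit_plane(np.array(pts))
            for pts in cells.values()
            if len(pts) >= min_points]

# A patch of noisy, roughly flat points should yield one mini plane
# whose normal is close to the z-axis.
rng = np.random.default_rng(0)
pts = np.column_stack([
    rng.uniform(0.0, 0.09, 50),    # x, within one 10 cm grid cell
    rng.uniform(0.0, 0.09, 50),    # y, within one 10 cm grid cell
    rng.normal(0.05, 0.001, 50),   # z, nearly flat surface
])
planes = mini_planes(pts)
```

Fitting many small planes like this can track curved or cluttered surfaces that a single large detected plane would smooth over, which is consistent with why the approach ended up more accurate.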

The Process

I iterated through many versions exploring different placement experiences, different rotation interactions, and visual indicators of the grid and what the camera was picking up on.



This project taught me that when you bring multiple disciplines into the brainstorming process, you can create more thorough solutions.

Next steps for this project would have been iterating on each feature based on customer feedback:

  • “The ‘too close’ message pops up every 10 seconds…” → Add buffer time between each error message
  • “‘Too dark’ shows up when I’m in a sufficiently lit area” → Adjust the brightness thresholds
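The buffer-time fix is essentially a per-message cooldown. A minimal sketch of that idea (class and parameter names are my own, not from the product):

```python
import time

class ErrorMessageThrottle:
    """Suppress repeats of the same tracking error within a cooldown window."""

    def __init__(self, cooldown_seconds=10.0, clock=time.monotonic):
        self.cooldown = cooldown_seconds
        self.clock = clock          # injectable for testing
        self.last_shown = {}        # message type -> timestamp last shown

    def should_show(self, message_type):
        now = self.clock()
        last = self.last_shown.get(message_type)
        if last is not None and now - last < self.cooldown:
            return False            # still inside the buffer window
        self.last_shown[message_type] = now
        return True

# With a fake clock: a repeat of "too_close" 5 s later is suppressed,
# but after the 10 s buffer it may be shown again.
t = [0.0]
throttle = ErrorMessageThrottle(cooldown_seconds=10.0, clock=lambda: t[0])
first = throttle.should_show("too_close")
t[0] = 5.0
second = throttle.should_show("too_close")
t[0] = 12.0
third = throttle.should_show("too_close")
```

Keeping the cooldown keyed by message type means one noisy error (like “too close”) can’t drown out a different, still-useful one (like “too dark”).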