To expand on my work last week with the MIRROR workout mirror, I tried to imagine what a voice user interface (VUI) would look like for the MIRROR. (If you need a refresher on the MIRROR as a product, feel free to read my blog from last week or check out their website. I think it’s pretty cool!) A VUI might not be the most traditional choice for a product with a heavily visual interface. However, it is important to include multiple pathways of interaction for each device, and including a VUI can create a more accessible product.
A VUI can allow users to enjoy a richer experience. It could also let users control the MIRROR during an intense physical workout or while looking away from the screen. With these justifications in mind, I set out to prototype a VUI for the first time, and it was a challenge!
User: Users of MIRROR, people who work out
Purpose: To assess the usability of a VUI interface with a visual product
Function: To shape future development guidelines for VUI as an additional pathway for interaction
The voice user interface for the MIRROR would act as a supplementary feature to the workouts. Users can ask to turn the mirror on or off, start workouts, pause workouts, listen to statistics, and take pictures. Currently, these features are all executed through a paired application. However, in order to streamline this interaction, a VUI (or gestures) could replace the application.
Sample Dialogues + Table Read
When prototyping VUIs, creating scripts is a good strategy for working out some preliminary discontinuities. I wrote five initial interactions, based on the video demos I watched last week, that could be executed with voice commands.
As recommended by Kristin and by Pearl, Ch. 2, I then held a table read with my roommate. I asked her some preliminary questions and framed the situation for her. We also sat in front of a long mirror so we could simulate a real testing environment.
Below is the script I used and a photo of the scene used.
1. Starting a workout
User: Hello Mirror
Mirror: Hello, [name]. Welcome to MIRROR. What would you like to do today?
User: I want to work on abs
Mirror: Okay, searching abs. You did Abs Explode with Kristen last week; would you like to do this again?
User: No, I want to try something new
Mirror: Okay, I recommend 30 Minute Abs with Marc. It has 4.6 stars out of thousands of reviews.
User: Okay, let’s start that.
Mirror: Starting 30 Minute Abs with Marc
2. Pause workout
User: Pause workout
Mirror: Pausing workout
User: Restart Workout
Mirror: Restarting workout
3. Finish workout
Mirror: Workout completed. Congratulations, [name]! Would you like to review stats or take a photo?
User: Take a photo
Mirror: Okay, taking a photo in 10… 9… 8… click
User: Oh I don’t like this photo
Mirror: Would you like to take another photo?
Mirror: Okay, taking a photo in 10… 9… 8… click
User: Oh, that’s better
Mirror: Would you like to save this photo?
4. View Stats
User: View stats from last workout
Mirror: Retrieving stats from workout… Okay, your last workout was 30 Minute Abs with Marc. Would you like me to read your statistics summary?
User: Yes
Mirror: You burned 200 calories, your heart rate was in the target range, and you reported feeling strong and healthy.
5. Turn off the mirror
User: Okay all done
Mirror: Goodbye Nicole, nice work today
After reading this script with my roommate Nicole (shoutout for being my tester :) I noticed that some of the wording wasn’t very natural. For example, saying “Hello MIRROR” is longer and more exhausting than saying “Hi MIRROR” or simply “MIRROR”. I also noticed that I hadn’t built in as many error states as I should have. This script was a start, but after the table read I realized how many different possible scenarios I had to build for. What if Nicole just wanted to browse ab workouts, or what if she wanted to take multiple pictures and save them all? With all of this in mind, I expanded my script into a VUI flow as shown below.
This VUI does not encompass all interactions; it is composed of one multi-step flow and three individual flows. I originally scripted all of these interactions chronologically to tell a story: my user would start the MIRROR, pick a workout, pause the workout, finish and take pictures after the workout, view stats, and then turn off the MIRROR. However, after further investigation I realized these were separate interactions.
In this flow, the user wakes the mirror from its inactive state by saying “Hey MIRROR”. Similar to Alexa, Siri, and Google, the MIRROR recognizes its name and responds to the user. The MIRROR is mainly used for workout purposes but has many other supplemental features, such as heart-rate tracking and photo taking, which all lend themselves to a general welcoming statement. Because the core function of this product is access to at-home workouts, the user can say a single word such as “abs” and the MIRROR can recognize that the user wants to do an ab workout. Using this keyword, the MIRROR generates ab workouts and accesses the user’s preferences and use history. With all of this information, the MIRROR suggests a frequented workout. If the user decides they want something new, the MIRROR defaults to recommending an ab workout with many positive reviews. The user chooses to do this workout, but if they want to browse other choices, the MIRROR asks for more specifics or relies on visual browsing.
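The keyword recognition in this flow could be sketched as a simple intent matcher: listen for a known keyword in whatever the user says, and fall back to a clarifying question otherwise. This is purely a hypothetical illustration in Python; the keyword table and function names are my own assumptions, not anything from MIRROR’s actual software.

```python
# Hypothetical sketch of keyword-based intent matching for the VUI flow.
# KEYWORDS and match_intent are illustrative names, not a real MIRROR API.
KEYWORDS = {
    "abs": "workout.abs",
    "arms": "workout.arms",
    "pause workout": "session.pause",
    "restart workout": "session.resume",
    "take a photo": "photo.capture",
    "view stats": "stats.summary",
}

def match_intent(utterance: str) -> str:
    """Return the first intent whose keyword appears in the utterance."""
    text = utterance.lower()
    for keyword, intent in KEYWORDS.items():
        if keyword in text:
            return intent
    # No keyword recognized: ask the user to rephrase (an error state).
    return "fallback.clarify"
```

So “I want to work on abs” maps to the ab-workout intent from the single keyword “abs”, and anything unrecognized routes to a clarifying prompt, which is one way to build in the error states the table read showed were missing.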
During the workout, users might want to pause, and can simply say “Pause workout”. This feature was designed with rest in mind: working out at home means that life can and will interrupt, and a voice command that pauses workouts supports the flexibility needed to work out at home. The user can easily resume by saying “Restart workout”. This feature would time out after an hour, but progress is saved and retrievable at any time.
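The pause behavior described here — resume within an hour, otherwise time out but keep progress retrievable — could be modeled as a tiny state machine. This is a minimal sketch under my own assumptions; the class and state names are hypothetical, not MIRROR’s real implementation.

```python
import time

PAUSE_TIMEOUT_S = 60 * 60  # paused sessions time out after an hour

class WorkoutSession:
    """Illustrative pause/resume logic for the 'Pause workout' command."""

    def __init__(self):
        self.state = "active"
        self.paused_at = None
        self.progress_s = 0  # elapsed workout time, kept across pauses

    def pause(self, now=None):
        self.state = "paused"
        self.paused_at = now if now is not None else time.monotonic()

    def resume(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.state == "paused" and now - self.paused_at > PAUSE_TIMEOUT_S:
            # Timed out: end the session, but progress stays retrievable.
            self.state = "saved"
        else:
            self.state = "active"
        return self.state
```

A “Restart workout” within the hour returns the session to `active`; after the timeout it lands in a `saved` state the user can retrieve later.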
Take a Photo
One of the unique features of the MIRROR workout mirror is the ability to take progress photos. This feature is particularly relevant because of the increased prevalence of fitness influencers and the growing popularity of progress pictures. People like to take pictures of themselves, and this should be a hands-free endeavor. With a simple vocal command, “Hey MIRROR, take a photo”, the MIRROR sets a timer and takes a picture. After the picture is taken, the user can choose to save it to a gallery or retake the photo.
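The photo flow above — count down, capture, then loop on “retake” until the user saves — can be sketched as a small loop. This is a hypothetical sketch: the `ask` and `capture` callables stand in for the real voice prompt and camera, which I am assuming can be injected this way.

```python
def photo_flow(ask, capture, countdown_from=10):
    """Voice-driven photo loop: countdown, capture, save or retake.

    ask(question)  -> the user's spoken answer ("save", "retake", or other)
    capture()      -> a photo object from the (hypothetical) camera
    Returns the list of photos the user chose to save.
    """
    saved = []
    while True:
        for n in range(countdown_from, 0, -1):
            pass  # on the real device, each number would be spoken aloud
        photo = capture()
        answer = ask("Would you like to save this photo or take another?")
        if answer == "save":
            saved.append(photo)
        if answer != "retake":
            # Anything other than "retake" ends the flow.
            return saved
```

This structure also leaves room for the multi-photo scenario from the table read: an extra answer like “save and take another” could both append the photo and continue the loop.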
Another great feature of this product is the data tracking. For this feature, I wanted to provide an additional pathway to access this information. The user can ask for a summary of their last workout, and the MIRROR will provide an answer including calories burned, heart rate, and perceived exertion. If a user wants a more in-depth summary, they can read further on their statistics page.
Lastly, it was important for people to be able to deactivate the MIRROR at any time. With a simple “Goodbye MIRROR”, a user can deactivate their mirror. After this flow, the MIRROR returns to its inactive state as an elegant home-decor piece.
Analysis + Evaluation
One thing that worked well in testing the VUI was referencing the Pearl chapter and creating a script. The table read helped me understand where some of the awkwardness in the interaction came from and how I could better reframe the conversation. With this information, it was easy to make adjustments where needed and fit my VUI into a diagram.
In terms of testing, it was challenging to view a table read as true “testing” because it didn’t really feel like I was testing anything. In the future, I would hope to test with more people at a table read. I also think it would have been helpful to actually use a speaker or the voice of the VUI as part of the table read. If I were to do this test again, I would probably supplement a table read with a more open situational approach. This would allow me to test and prepare for error cases.
However, I really struggled with feature discoverability in my testing. I had a lot of ideas and wasn’t sure how to disclose them to the user, so I experimented with where in the order of the interactions to place the prompts that introduce features.
Overall, this week was very challenging and I learned a lot. I struggled with how to incorporate a VUI into a product that is inherently visual, and even considered changing my project entirely. I stand by my reasoning that including gestures, VUI, and a visual interface creates multiple platforms of engagement and is valuable to the user.
I have a lot of respect and admiration for people who are VUI designers because the work is so intricate and challenging. My work feels a bit incomplete this week because I only prototyped five possible interactions when I know there are so many more to be prototyped. I only diagrammed part of the interaction and felt overwhelmed; I cannot imagine prototyping an entire product’s worth of interactions.