MHCI Capstone Project
Improving procedure execution for NASA astronauts
My Master of Human-Computer Interaction (MHCI) capstone project at Carnegie Mellon University was a 32-week, industry-sponsored team project covering a full product development cycle: research, design, and development. As the UX Design Lead, I led prototyping and user experience design while contributing to all other aspects of the project.
Role: UX Design Lead
Project Type: Capstone Project
Team: Vivek Pai, Holly Brosnahan, Ron Kim, Andrew Park
Platforms: Augmented Reality Display, Wearables, Tablet
Tools: Google Slides, Photoshop, InDesign
Timeline: 32 weeks
Our Mission from NASA
NASA tasked my team and me with envisioning how an astronaut might execute an unfamiliar procedure. Aboard the International Space Station (ISS), astronauts diagnose maintenance issues and perform experiments with access to operators on Earth, who can clarify instructions or work around potential challenges. On future crewed missions to Mars, however, communication delays mean an operator may not be immediately available to assist. With this in mind, NASA also asked us to consider how astronaut autonomy could be better supported during procedure execution. Our team explored the context in which astronauts work and developed a forward-looking hardware prototype suitable for executing tasks during future missions.
1. Preliminary Research
We spent two months researching astronaut experiences and analogous domains. Through interviews with retired astronauts and a literature review, we isolated the defining characteristics of an astronaut’s experience, then used those characteristics to identify comparable domains that we could actually observe on Earth. Altogether we conducted twenty interviews and four contextual inquiries, participated in three experiential learning opportunities, completed a literature review and a review of current technology, and ran two low-fidelity experiments.
2. Research Analysis
After completing our preliminary research, we spent one month analyzing the data. We created 12 sequence models and 11 flow models to display the procedural and interpersonal information that we gathered from the domains. Then, we organized our thousands of notes into an affinity diagram to find the patterns in the data. From all of these notes we distilled four main insights, which we presented to NASA in May.
3. Design Direction
After presenting our insights, I led a visioning session in which our team and members of NASA’s HCI team bodystormed (brainstormed by physically acting out situations) solution ideas based on prompts derived from our insights. As a result, we narrowed our options to four design directions, choosing based on each idea’s feasibility, its impact for astronauts, and its excitement factor, since the final product would be used to demonstrate the possibilities of future technology for NASA.
These discussions led us to focus on clarifying instructions with the help of the Internet of Things. Within this concept, we intended to use sensors on tools to determine their current locations and to enable self-referential directions in procedure documents, and to explore different methods for improving spatial navigation and cognitive orientation.
4. Low Fidelity Prototypes
We created three low-fidelity prototypes to test our ideas for tracking cognitive status through a procedure and for guiding users to the locations of tools. Each prototype explored a different platform: augmented reality goggles, wearables with audio instructions, and a tablet.
We used our prototypes to walk participants through a simple but unfamiliar cooking procedure. These usability tests taught us about effective styles of cognitive and spatial orientation, let us explore potential platforms, and highlighted several areas to improve in our product development.
5. Medium Fidelity Prototype
For the medium-fidelity prototype, we shifted our focus to one of the NASA procedures from the Rodent Habitat, a recurring science experiment on the ISS. During this procedure, the astronaut is often up to their elbows in protective equipment inside a containment unit. This constraint led us to move forward with an augmented reality platform that left the user’s hands free to complete the task while still providing a scannable, visual interface.
After returning from the Augmented World Expo, we determined that commercially available augmented reality glasses are expensive and did not yet have the capabilities to realize our goals. Since our initial prompt asked us to consider technology available five to ten years in the future, we decided to build our own mocked-up version of AR glasses for our medium- and high-fidelity prototypes instead.
The medium-fidelity prototype simulated augmented reality with a cell phone attached to a helmet, serving as a heads-up display. The phone showed a browser-based set of slides and was mounted on a bike helmet so that the participant could see it at the top of their vision while still viewing the environment below it. Participants controlled the interface with spoken commands, which we implemented using Wizard of Oz techniques, and we remotely controlled directional arrows to lead participants to tools during the setup section of the procedure.
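The Wizard-of-Oz control logic can be sketched roughly as follows. This is a hypothetical reconstruction for illustration, not the code from our prototype (which was a browser slide deck advanced by a hidden facilitator): the participant speaks a command, the facilitator hears it and sends the matching action, and the heads-up display updates.

```python
class WizardOfOzHud:
    """Sketch of a facilitator-driven heads-up display: slide index and
    directional arrow are both changed by facilitator actions."""

    def __init__(self, steps):
        self.steps = steps      # ordered slides, one per procedure step
        self.index = 0          # currently displayed slide
        self.arrow = None       # directional cue toward a tool, if any

    def handle(self, action):
        """Apply one facilitator action and return the new display state."""
        if action == "next" and self.index < len(self.steps) - 1:
            self.index += 1
        elif action == "back" and self.index > 0:
            self.index -= 1
        elif action == "clear":
            self.arrow = None
        elif action.startswith("arrow:"):      # e.g. "arrow:left"
            self.arrow = action.split(":", 1)[1]
        return {"step": self.steps[self.index], "arrow": self.arrow}

hud = WizardOfOzHud(["Gather tools", "Open habitat", "Replace filter"])
hud.handle("arrow:left")    # facilitator points participant toward a tool
state = hud.handle("next")  # participant said "next"; facilitator advances
# state == {"step": "Open habitat", "arrow": "left"}
```

Clamping `index` at both ends keeps a mistimed facilitator keypress from ever showing a nonexistent slide, which matters when the operator is improvising in real time.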
The usability tests for this prototype suggested we were moving in the right direction with responsive directional arrows, affordances that supported verbal command usage, and elements that improved cognitive orientation: a ratio-based progress bar showing both the current step and rough timing estimates, a task overview, and supplemental images to clarify instructions.
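A ratio-based progress bar of this kind can be computed by weighting each step by its rough time estimate, so that long steps move the bar more than short ones. The sketch below is a minimal illustration; the step estimates are made-up values, not from the actual procedure:

```python
def time_weighted_progress(step_minutes, steps_done):
    """Fraction of the estimated total time completed after finishing
    `steps_done` steps; drives a ratio-based progress bar."""
    total = sum(step_minutes)
    return sum(step_minutes[:steps_done]) / total

# Hypothetical rough estimates (minutes) for a four-step procedure.
estimates = [5, 20, 10, 5]
print(time_weighted_progress(estimates, 2))  # 25 of 40 minutes -> 0.625
```

Pairing this fraction with the plain step count ("step 2 of 4") gives both kinds of orientation at once: how far along the work is in time, and where the participant sits in the sequence.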
6. High Fidelity Prototype
For our high-fidelity testing, we created a more technologically advanced prototype using a transparent interface, speech recognition, and beacon technology. All interactions with the interface were automated except the navigational arrows pointing to tools, because the beacon technology was not as robust as we had originally anticipated. Instead of using the beacons to fully automate the locating process, we could only use the beacon data to automatically indicate when the participant was within one meter of a beacon’s location.
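A within-one-meter indication like this is commonly approximated from beacon signal strength with a log-distance path-loss model. The sketch below is an assumption about how such a check might work, not the code from our prototype; the calibrated 1 m reading of -59 dBm and the path-loss exponent of 2.0 are typical illustrative values, not measured ones:

```python
def estimate_distance_m(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated distance grows as the
    received signal falls below the strength calibrated at one meter."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def near_beacon(rssi_dbm, threshold_m=1.0):
    """True when the estimated distance is within the trigger radius."""
    return estimate_distance_m(rssi_dbm) <= threshold_m

print(near_beacon(-55))  # stronger than the 1 m calibration -> True
print(near_beacon(-75))  # roughly 6 m by this model -> False
```

Signal-strength readings are noisy indoors, which is consistent with our experience that a coarse "you are close" trigger was achievable while precise, continuous localization was not.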
Along with the technology, we upgraded the device training, procedure document, task materials, user interface, voice commands, and the physical device. The new interface was displayed as a reflection on a piece of teleprompter glass, produced by an iPad mini mounted above it showing a horizontally flipped image. We traded the bicycle helmet for a construction helmet with counterweights in the back, giving a more stable and comfortable physical experience. The boxes used for the Rodent Habitat were painted and reinforced to better imitate the real equipment and to survive future NASA demonstrations after the project was completed. The improvements to the device training and procedure documents reduced confusion and the time participants took to complete the task.
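Because teleprompter glass shows a reflection, the source image has to be mirrored horizontally so that the participant reads it correctly. The core transform amounts to reversing each pixel row; the following is a deliberately simplified, stdlib-only sketch of that idea (in practice the flip was a display setting, not custom code):

```python
def mirror_horizontal(frame):
    """Reverse each row of a 2-D pixel grid so the reflected image
    reads left-to-right to the viewer."""
    return [list(reversed(row)) for row in frame]

frame = [["N", "A", "S", "A"]]
print(mirror_horizontal(frame))  # [['A', 'S', 'A', 'N']]
```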
7. Final Prototype and Deliverables
The final prototype was very similar to the high-fidelity prototype, with a few usability and UI adjustments made after two Operational Readiness Tests and additional prototype testing. Alongside the final prototype, we produced a wide range of final deliverables for both our faculty and our clients, including a website, a presentation, a design booklet, design recommendations, and a video sketch.
For more details about this project, please see our website.