This was a 2-semester-long capstone project in collaboration with IQ Solutions to design high-level product concepts to support individuals living with MCI (Mild Cognitive Impairment) or early-stage ADRD (Alzheimer's Disease and Related Dementias), and their caregivers. We designed a system using smart speakers and a smartphone that allows individuals with MCI to complete and confirm activities based on time, task, and proximity. The design and prototype developed during this project will be used to help secure funding for future development.
We used a modified version of the Remote Design Sprint methodology to test several different product concepts across 5 sprints, extending each sprint to 4-5 weeks.
Sprint 1: Understanding
Detecting Proximity
The Problem Space
We started our journey with some baseline research on the problem.
We also saw an opportunity in smart speakers in particular as a solution format. Smart speakers are already widely used in MCI communities because the voice interface is intuitive and works well for things like reminders and getting information quickly. Adoption is growing in general: the average number of smart speakers per household rose from 1.7 in 2017 to 2.6 by December 2019. As we researched, we found that for the MCI and ADRD population there are existing solutions that help manage things like medication, but not many products focused on bringing joy to these individuals. Staying socially and cognitively active is key to delaying more advanced stages of ADRD.
The Scope
We first designed for the baseline technology. IQ Solutions required that the system build upon currently existing smart speakers to keep costs low; for the system to work best, users would ideally have multiple smart speakers throughout their homes.
We determined that proximity detection was the most pressing issue to tackle first. Smart speakers remind the user at a designated time, but the user may not be close to the speaker, or might not be in the correct room or situation to complete the task. This is especially an issue for individuals with MCI and early-stage ADRD; they might forget the reminder before they get to the correct location for a task. Being unable to successfully complete IADLs (instrumental activities of daily living) at the right time can prevent this group from aging in place independently in their homes. How might we detect when someone is close to a smart speaker to provide notifications at the right time, in the right place?
The Initial Sketch: How does it work?
From various presentations we attended and our talks with experts, we learned that medication adherence is one of the biggest issues for our user group, so our scenario involved the user setting up the system and the system reminding them to take their medication at the correct time and location. Once the user confirmed that they took their medication, the system would send a notification to a designated caregiver. Below is the sketch I did showing this in action, which we ended up using for our final design.
What's in the system?
From our problem space, we designed several possible solutions for detecting user proximity, and decided to go with a solution that could utilize several devices: beacons, smart speakers, the user’s phone, and a wearable.
In future testing sessions we hid the beacons unless asked, because we wanted more feedback on other portions of the prototype beyond "how it works."
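To make the proximity idea concrete, here is a minimal sketch of how beacon-based detection could work: the phone (or watch) listens for Bluetooth signal strength from room beacons and only lets a reminder through when the user is near the beacon tied to the task. This is an illustrative assumption, not our actual implementation; the beacon names, thresholds, and calibration values are hypothetical.

```python
# Sketch only: estimating which room the user is in from BLE beacon RSSI.
# Names, thresholds, and calibration values are illustrative assumptions.

TX_POWER_AT_1M = -59      # dBm measured at 1 meter, typical for a BLE beacon
PATH_LOSS_EXPONENT = 2.5  # rough indoor path-loss exponent

def estimate_distance_m(rssi: float) -> float:
    """Convert an RSSI reading (dBm) to an approximate distance in meters
    using the log-distance path-loss model."""
    return 10 ** ((TX_POWER_AT_1M - rssi) / (10 * PATH_LOSS_EXPONENT))

def nearest_beacon(rssi_readings: dict[str, float]) -> tuple[str, float]:
    """Given the latest RSSI reading per beacon, return the beacon with the
    strongest signal and the estimated distance to it."""
    beacon_id = max(rssi_readings, key=rssi_readings.get)
    return beacon_id, estimate_distance_m(rssi_readings[beacon_id])

def should_deliver_reminder(rssi_readings: dict[str, float],
                            target_room_beacon: str,
                            max_distance_m: float = 3.0) -> bool:
    """Only deliver a reminder when the user is near the beacon in the room
    where the task (e.g. taking medication in the kitchen) happens."""
    beacon_id, distance = nearest_beacon(rssi_readings)
    return beacon_id == target_room_beacon and distance <= max_distance_m

# Example: the phone hears two beacons; the kitchen one is clearly closer.
readings = {"kitchen-beacon": -62.0, "bedroom-beacon": -80.0}
print(should_deliver_reminder(readings, "kitchen-beacon"))  # True
```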
The First Mockup
We created a clickable mockup using Figma (including narrative slides) that guided the user through a narrated scenario covering setup of the speaker, beacons, watch, and phone app. I laid out the onboarding phone screens.
We didn’t have IRB approval for our project at this point, so we were not able to test with our target user group: individuals with MCI and early-stage ADRD. We tried to reconcile this by testing the prototype with older adults instead.
Testing Challenges
Because we were unable to test with our target users, interviewees had some misconceptions about individuals with MCI and their capabilities. When asked to imagine themselves as an individual with early-stage ADRD, interviewees thought that they would not be able to do any of the tasks presented in the prototype.
We later presented the prototype to Emma Dixon, an early-stage ADRD expert, and she said that the tasks would not be too complicated for our user group. Even so, we feel the design could be simplified for a better user experience.
Initial Learnings
Sprint 2 + 3: Research and Exploration
Motivation and our Target Users
Going forward we...
When we ended Sprint 1, we were still lacking a necessary understanding of our target user group. To fill this knowledge gap, we conducted much more supplemental research before tackling our next design: we watched video interviews with individuals with MCI, read research papers, spoke with dementia and aging experts such as Emma Dixon, and watched expert presentations.
There are a lot of solutions out there for medication adherence. However, there aren’t many technological solutions for preventative care against the condition’s progression, or solutions that bring joy into these people’s lives. According to our research, staying socio-cognitively active is key to staying in the MCI stage of ADRD for longer, so this became our Sprint 2 and 3 focus. New focus: What are motivational factors ("the whys"), and how can we use them with contextual information to suggest things at the right time?
Our Key Design Concepts
Testing Key Concepts
To test our ideas more smoothly, we decided to go with a video showing contexts of use for 2 different use cases. This enabled us to focus on user reception of our larger concepts rather than on high-fidelity interactions.
We got IRB approval, so we were able to do 12 interviews to test our 2 prototypes! Six of those interviews were with individuals with MCI, and the other six were with caregivers of people with ADRD. We recruited through snowball sampling and Facebook support groups.
Feedback during interviews
This would be immensely helpful for me. I have depression too, so it's really difficult to get off the couch sometimes.
I like that it's gentle and wouldn't make me feel stupid or judged like when my family tells me to do things. It seems like it would be easy to use.
I like that it's more proactive than reactive. It encourages and stimulates you like a coach.
What did people think about our concepts?
1. Is this more motivating than what people are currently doing?
Yes: Both caregivers and users with MCI found all the features motivating and felt they reduced burden on themselves and their loved ones.
2. Will people want to do activities suggested by the system?
Yes: Most participants appreciated the activity suggestions because of the positive reinforcement and their usefulness for developing and maintaining habits.
3. Do people find the system intrusive or invasive?
No: As long as they could turn off some data-sharing features, users had minimal reservations.
4. Do people like the system having a more conversational tone?
Yes: The conversational aspect was viewed as a key competitive advantage compared to other devices on the market.
What we learned about our users...
Sprint 4 + 5: Implementation
Making the Final Prototype
Since the concepts we tested were received well in video form, we moved forward with creating a higher-fidelity prototype for both the Alexa Skill and the phone application.
Because participants wanted a high level of customization, we decided that onboarding would be the focus of Sprint 4. In Sprint 5, we built out the rest of the system as it would function for 2 different use cases: gardening and guided mindful stretching.
Product research: What onboarding questions should we ask?
How might we ask for just enough information upfront to maximize immediate functionality, without making setup too cumbersome? I was the team facilitator for this sprint, and I had to figure out how to adapt the Design Sprint methodology for our newly product-oriented goals. We started by doing product research on habit-forming and goal-setting apps, examining over 20 different apps in the process.
Final Prototype
We started with a Wizard of Oz testing method for the Alexa skill: we created a detailed map of possible inputs and corresponding outputs, and pre-recorded all outputs using Amazon Polly. This worked especially well for gathering potential user inputs, since we hadn't built out the full breadth of user responses yet. During Sprint 5, we used VoiceFlow to fully build out the voice prototype.
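As a rough illustration of the pre-recording step, a short script along these lines could batch-synthesize the mapped outputs with Amazon Polly via boto3. The prompt texts, file names, and voice choice below are hypothetical, not taken from our prototype.

```python
"""Sketch: pre-recording Wizard of Oz prompts with Amazon Polly (boto3).
Prompt texts, file names, and the voice are illustrative assumptions."""
import boto3

polly = boto3.client("polly")

# A slice of the input -> output map used during Wizard of Oz testing.
prompts = {
    "greeting.mp3": "Good morning! Would you like to do some gardening today?",
    "confirm_meds.mp3": "Great, I'll let your daughter know you took your medication.",
    "reschedule.mp3": "No problem. Would you like me to remind you again in an hour?",
}

for filename, text in prompts.items():
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId="Joanna",  # any available Polly voice
    )
    # Save the synthesized audio so the "wizard" can play it back on cue.
    with open(filename, "wb") as f:
        f.write(response["AudioStream"].read())
```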
We used a Figma prototype to build out the phone onboarding. We split into two working teams; I was on the phone app team and helped with layouts, illustrations, and visual design.
Onboarding
The user can answer onboarding questions on their phone, on their smart speaker, or on both at the same time. Having both visual and audio output can reduce mental load. Here are some aspects of the onboarding.
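One way to keep the two channels in sync is to store each onboarding question once, with both a screen version and a spoken version, so an answer given on either device fills the same profile field. The sketch below is a hypothetical data model, not our actual implementation; the question keys and wording are made up.

```python
# Illustrative sketch of a multimodal onboarding question; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class OnboardingQuestion:
    key: str                 # where the answer lands in the user profile
    screen_text: str         # shown on the phone
    voice_prompt: str        # spoken by the smart speaker
    options: list[str] = field(default_factory=list)  # empty = free-form answer

QUESTIONS = [
    OnboardingQuestion(
        key="preferred_name",
        screen_text="What should we call you?",
        voice_prompt="First things first, what would you like me to call you?",
    ),
    OnboardingQuestion(
        key="favorite_activities",
        screen_text="Which activities do you enjoy?",
        voice_prompt="Which of these activities do you enjoy? You can pick a few.",
        options=["Gardening", "Stretching", "Cooking", "Music"],
    ),
]

def record_answer(profile: dict, question: OnboardingQuestion, answer: str) -> None:
    """Whichever channel the answer arrives on (phone tap or voice reply),
    it lands in the same profile so the other channel can skip the question."""
    profile[question.key] = answer
```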
Video Demo
Future research areas...
This project will be continuing into next year with a new student team. Though we covered a lot of ground with MCI, we didn't get to touch on some important topics. We handed off these topics as potential areas to explore.
- How might we get this in front of people before an actual diagnosis?
Many people don't get diagnosed until the moderate stages of ADRD, when it would be difficult to adopt this system. How can we ensure we address this in our marketing and language?
- Re-integrating caregivers into the equation
We focused on the care recipients first and foremost for our design, but a caregiver-facing design would be necessary to keep loved ones in the loop.
- Facilitating the conversation between caregivers and care recipients on privacy, needs, and boundaries within the system
With caregivers being reintegrated, there are also more issues to address with privacy and boundaries.
- Building out speaker dialogue for more use cases
We built out the system only for gardening and guided mindful stretching.
- Designing for the smart watch and beacons, and exploring AirTags capabilities
Though these are part of our system, we didn't address them with fully fleshed-out designs and interactions. Apple recently came out with AirTags; since we assumed Apple's iBeacon protocol as part of our system, AirTags may be worth exploring.