Designing and prototyping a talking AI that helps the elderly take action when at risk of a fall.

At the end of my UX semester in Amsterdam, I had a final project to complete: six weeks with a single client on one complex project. The assignments were individual, so everyone in the class focused on their own design challenge within the same project. This year the client we teamed up with was Digital Life Laboratory. They had developed a device based on Kinect technology that analyses an elderly person's body movements and takes action to prevent a fall. They named it Bravo.


Project Introduction

In the Netherlands, an elderly person is admitted to hospital due to a fall every five minutes. Fortunately, there are ways to reduce someone's risk of falling; fall prevention training and a medical examination are just two of them. However, the elderly living on their own often underestimate the risk of falling.

Project Definition

The overall goal of the project is to prevent the elderly from falling. Broken down into smaller parts, my design challenge was:

"How to persuade the elderly to take action before they fall. Convey the message when the elderly are at risk of falling."


Before this project started in May 2017, we had done preliminary research in February of the same year. Twenty-eight people were divided into teams working on different parts of the research. One team worked on elderly empathy. Other teams worked on the actual target user research (elderly living alone or with family), on the other stakeholders involved (family, friends, neighbours), and on professional stakeholders (home caretakers, general practitioners). The outputs were a variety of personas, scenarios, experience maps, cultural probes and even an empathy suit that lets the wearer physically feel the limitations of an elderly body. Having a really deep understanding of the field through such extensive research was crucial for creating the best solution to the problem.

Main Findings

To keep this study brief I will point out the most relevant findings from the research:

  • No touch screens. When trying to complete various tasks wearing the empathy suit, we found that using touch devices often gets very frustrating.
  • Interest in new technologies. After interviewing a few users from our target group, it was clear that they are more open to new technologies than one might initially think. The real problem is that the technology is often not designed inclusively enough for the elderly, which makes both onboarding and everyday use a real challenge.
  • TV is a part of every elderly person's routine. In the cultural probe research, where our potential users made notes and took pictures of their days, I realised that most people in our target group (elderly living on their own) structure their lives around routines. Digging into the details of those routines, we found that the most common recurring one was interaction with a TV. This led us to the insight that the TV and its remote control, being commonplace in our target group's lives, might be among the most appropriate communication channels.
  • The feeling of loneliness. The elderly like to talk to people. Usually being indoors on their own, they tend to feel very lonely. They often seek someone to talk to.
  • Misplacing devices. Phones and other devices are often misplaced, which fuels frustration. The regular need to charge them is another common problem. Interviewing family members who take care of an elderly person, we found out that the elderly can be very stubborn; they are more willing to accept a request when spoken to than when reading it off a screen.

Health Research

Since I had no experience in healthcare, I needed to research what was actually possible for me to implement in my design. To my surprise, not much. There are plenty of exercises to train the body and keep it active, but everything has to be done in the presence of a professional; done unsupervised, most of these solutions could increase the risk instead of reducing it. After reading studies from the World Health Organization, I came to the conclusion that the best way to prevent a fall would be for the user to simply sit down and relax, with a professional called in when necessary.

Source: who.int

The Approach

I started by reading articles online, talking with the elderly and sketching first ideas of what the experience could be like. After a few feedback meetings, I found that the best channel to connect with the target group would be voice.

Knowing the routines of my target group, I came to realise that in terms of showing feedback and visual data, the best device is an actual TV. I combined the two ideas and sketched out the prototype of the product.
I decided to create a multimodal experience using a voice interface followed by a TV interface. This created a big challenge for prototyping and usability testing. Both of these interfaces required research into voice and TV user interfaces.


Sketch of a use case

The Product

I focused on one specific scenario: when the Bravo device detects that the probability of falling is rising, it tries to persuade the user to sit down and relax. Initially, this is done by starting a conversation. The goal is not to tell the user about the danger directly, but rather to keep them in a positive mood instead of scaring them off. If the device deems the health risk out of the ordinary, it also notifies the caretaker. To ensure that the user follows the relaxation routine, the TV interface encourages them by displaying feedback and various types of notifications.

To get an exact overview of the user's interaction with the device and what needed to be done, I created a brief scenario with exact experience touchpoints.

Experience touchpoints

The goal is to have the device handle most of the cognitive processing, making the experience as straightforward as possible for the user.

Voice Design

During the voice interface design process, I designed for two different aspects of the product: voice input and audio output. The challenge was to create an interface that doesn't merely have good copy, but also maintains a good conversation. It is crucial to make the conversation feel as natural as possible by creating a good flow between the inputs and outputs. When the device leads a good conversation, its persuasion capabilities increase thanks to being perceived as more human. During the process I also made the distinction between the parts of the experience that should be conveyed visually and the ones that ought to be handled via audio.
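One way to keep that flow between inputs and outputs coherent is to model the conversation as a small state machine, where each device prompt declares which user intents it accepts. A minimal sketch in JavaScript; the state names and phrases below are illustrative, not the actual Bravo script:

```javascript
// Minimal conversation state machine: each state holds a device prompt
// and a map from recognised user intents to the next state.
const flow = {
  greet:   { say: "Hi, do you have a moment?",           next: { yes: "suggest", no: "later" } },
  suggest: { say: "How about sitting down for a bit?",   next: { yes: "confirm", no: "later" } },
  confirm: { say: "Great, I'll turn on the TV for you.", next: {} },
  later:   { say: "No problem, I'll check in later.",    next: {} },
};

// Advance the conversation given the user's recognised intent.
// An unrecognised intent keeps the current state so the device can re-prompt.
function step(state, intent) {
  return flow[state].next[intent] ?? state;
}
```

For example, `step("greet", "yes")` moves the conversation to the suggestion prompt, while a mumbled, unrecognised answer keeps the device on its current question instead of derailing the flow.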

Designing for TV

Since the target group is very comfortable interacting with a TV, it was the obvious choice for the feedback visualization channel. A TV is a device mainly for entertainment and relaxation, which perfectly matches the project's objectives. Designing for a TV is different from designing for smaller devices: the user usually watches it from a distance, in a vastly different context. The most important part was the typography. To maintain good readability for the elderly user from a distance, I went with a heavier font weight and a large type size. Darkening the live TV content behind the message allowed me to create sufficient contrast and shift the user's focus to a different element on the screen.


TV UI sketches

Creating different variations of the notification led me to conduct a small validation session on a TV screen. The goal was to maintain good readability from standard couch distance while covering as little screen space as possible. For showing how much time is left until the relaxation session is completed, I initially wanted to use a circular bar. However, since TV UI is all about covering as little screen space as possible, I went with a straight progress bar that covers about half the space. Keeping the user motivated to finish the task is the primary goal of the UI. This is done by displaying the progress along with varied motivational messages.
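Behind the bar itself, the progress feedback is just a mapping from elapsed time to a fill fraction. A one-line sketch (the session length is an assumed parameter, not a value from the project):

```javascript
// Fraction of the relaxation session completed, clamped to [0, 1]
// so the bar never over- or under-fills.
function progress(elapsedSec, sessionSec) {
  return Math.min(Math.max(elapsedSec / sessionSec, 0), 1);
}
```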

Edit after assessment: As TV programmes in the Netherlands often contain subtitles, the displayed message would not be clearly readable due to the collision with subtitles. This situation unfortunately occurs even if the background is dimmed.


A few initial variations of the TV UI.


Arriving at a clear idea about the whole experience and completing the first version of a few visual elements for the TV UI, I went straight into prototyping. I spent some time thinking about the best way to showcase this solution and test it with real users. The journey from an automatically triggered voice interface to displaying data over live TV in one user flow proved to be a rather challenging prototyping case. I decided that the best tool for this particular project would be Framer. A few hundred lines of code later, I had a basic prototype that could recognise human speech, talk back (with a British accent :)), turn on a simulated TV and display notifications combined with different dynamic visual elements. This enabled me to test the solution with real users without sacrificing parts of the experience. I also prepared the prototype so that I could change what the device says with a single command, which enabled iteration over different conversation variants during testing.
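The "change the script with a single command" trick can be sketched as the prototype reading every device line from a pluggable script object. All names and phrases here are illustrative; the real prototype was built in Framer and drove the browser's speech synthesis and recognition, which this sketch leaves out so the swapping logic stays testable:

```javascript
// Two hypothetical conversation variants for A/B-style testing sessions.
const scripts = {
  direct:   ["Hello, please sit down for a moment.", "Thank you, relax now."],
  friendly: ["Hi there! Fancy a little break?", "Lovely, let's put the TV on."],
};

let active = scripts.direct;

// One command to switch the conversation variant between test runs.
function useScript(name) {
  if (!scripts[name]) throw new Error(`unknown script: ${name}`);
  active = scripts[name];
  return active;
}

// In the real prototype each line would be handed to speech synthesis;
// here it is simply returned so the flow can be inspected.
function deviceLine(turn) {
  return active[turn];
}
```

Because every spoken line goes through `deviceLine`, swapping the variant mid-session changes the device's wording without touching the rest of the interaction flow.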

Make sure your microphone works properly to fully interact with the prototype. If you experience any trouble, try accessing the prototype at this URL:


The first session took place with a moderator and two potential users from the target group. At the beginning, I wanted to have as little influence on the users as possible, so I briefly told them about the device and what it actually does, leaving out the details. The first testing was a profound learning experience for me, as it allowed me to identify multiple previously unseen pain points.

  1. One of the biggest surprises for me was the way people acted at the beginning of the experience. They were immediately startled by the speaking device and confused about how to respond. It wasn't until this point that I realised that when people do interact with voice interfaces, they usually trigger the system themselves and start the interaction (Hey Siri, OK Google, ...). In the case of Bravo, it is exactly the opposite: Bravo starts the interaction with the user. The users were definitely not used to this kind of trigger. This insight significantly altered my thinking about the product. First, the user needs a heads-up before the start of the experience, so that they can prepare themselves for the conversation. Second, the device should ask whether it is appropriate to interrupt at that particular moment.
    Solution: When the device is triggered to speak (because of a high falling risk), it gives the user a heads-up. A simple sound might do the trick: long enough for the user to process what it is, and calm enough not to scare anyone. By introducing itself, the device softens the surprise of the initial voice message. The device's first question should also be whether it may interrupt at that moment; if not, it can try again in a few minutes.
  2. The whole conversation between the device and the user is too fast and too short. The device was not able to persuade the user to take appropriate action in such a short timespan.
    Solution: Add at least one more reply to the whole conversation flow. Furthermore, increase the delays between responses.
  3. The quick switch from voice to TV also took users by surprise. It was a little scary to see the TV turn on automatically while they were speaking to the device.
    Solution: Turn on the TV right when the device starts talking. Turning on the TV simultaneously with the start of the conversation is more intuitive. This sync provides a visual clue at the beginning of the flow that a process is starting. With the TV being turned on immediately, feedback can be displayed for the user, so that they know whether the device understood them correctly.
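The three fixes above fold into one revised interaction sequence: a calm chime, the TV switching on together with the first words, a permission question, and slower pacing. A hypothetical sketch of that ordering; the delay values are placeholder guesses, not tested numbers:

```javascript
// Revised flow after the first testing session: chime first, TV on with
// the opening words, ask permission to interrupt, and pace turns slowly.
function buildFlow(userAgrees) {
  const steps = [
    { action: "chime", delaySec: 2 },  // calm heads-up sound before any speech
    { action: "tvOn",  delaySec: 0 },  // TV turns on together with the voice
    { action: "say",   delaySec: 3,    // device introduces itself and asks permission
      text: "Hello, this is Bravo. May I interrupt you for a moment?" },
  ];
  if (userAgrees) {
    steps.push({ action: "say", delaySec: 3,
      text: "How about sitting down and relaxing for a bit?" });
  } else {
    // Not a good moment: back off and retry a few minutes later.
    steps.push({ action: "retryLater", delaySec: 300 });
  }
  return steps;
}
```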

I made a short video illustrating one use case.


I am intentionally describing the details of the interface and the microinteractions only after the testing session, because the usability testing had a great influence on them. The interactions are designed around a single objective: supporting the user throughout the experience. To make the interactions as smooth as possible, the system has to show whether the device or the user is talking at any point in time. This is achieved with a pulsing dot animation that changes colour: blue while the device speaks, green while the user speaks. To incentivize the user to go through with the entire experience, feedback is shown at the edge of the screen, displaying the time left until the end of the recommended relaxation.
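The turn indicator reduces to a single mapping from the current speaker to the dot's colour; the colour names are illustrative, and the `idle` state is an assumption for when nobody is talking:

```javascript
// Pulsing-dot colour per conversation turn: blue while the device
// speaks, green while the user speaks, hidden otherwise.
const dotColour = { device: "blue", user: "green", idle: "none" };

function indicator(speaker) {
  return dotColour[speaker] ?? "none";
}
```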



From the very beginning, this project shifted towards the new territory of voice-driven interfaces, TV UI and a new approach to human-computer interaction. Together with an interesting target audience, solid research and a unique challenge in prototyping and testing, it became a very exciting project to work on. I would not by any means consider this a final product; rather, it is the start of an exploratory journey into a world of new possibilities.

Further Development

Persuading older people to sit down when they are at risk does not solve the problem for the long term. What's more, I often felt I was making their health even worse by having them sit and watch TV, even though the original intent was to let them relax and wait for the caretaker. After digging deeper into health research, I found that there are a few easy exercises the users could do while watching TV, which could improve their health in the long term. I definitely see potential in making the interaction flow longer and implementing an exercise session. In a broader sense, there are many other areas and scenarios in which voice-based human-computer interaction could help the elderly around the house.