Tech giant Google announced in March that it had added fall detection capabilities to its Pixel Watch, which uses sensors to determine whether a user has taken a hard fall.
If the watch does not detect a user’s movement for approximately 30 seconds, it vibrates, sounds an alarm, and displays prompts allowing the user to select whether they are fine or need assistance. The watch notifies emergency services if no response is chosen after one minute.
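As a rough illustration of that escalation flow, here is a minimal sketch in Kotlin. The class and function names are hypothetical, and the timings simply mirror the behavior described above; this is not Google's actual Wear OS code.

```kotlin
// Illustrative sketch only: hypothetical names, not Google's Wear OS code.
enum class UserResponse { IM_OK, NEED_HELP, NO_RESPONSE }

interface FallAlertUi {
    fun vibrateAndSoundAlarm()                           // haptics plus audible alarm
    suspend fun awaitResponse(timeoutMs: Long): UserResponse
}

// Called after the watch detects a hard fall followed by ~30 s of no movement.
suspend fun handleSuspectedFall(ui: FallAlertUi, notifyEmergencyServices: () -> Unit) {
    ui.vibrateAndSoundAlarm()
    when (ui.awaitResponse(timeoutMs = 60_000)) {        // one minute to respond
        UserResponse.IM_OK -> Unit                       // user is fine; stand down
        UserResponse.NEED_HELP,
        UserResponse.NO_RESPONSE -> notifyEmergencyServices()
    }
}
```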
In part one of our two-part series, Edward Shi, Product Manager on the Android and Pixel personal safety team at Google, and Paras Unadkat, Product Manager for wearable health/fitness sensing and machine learning at Google and Fitbit, sat down with MobiHealthNews to discuss the steps they and their teams took to create the Pixel Watch's fall detection technology.
MobiHealthNews: Can you tell me about the fall detection development process?
Paras Unadkat: It was certainly a long journey. We started this a few years ago, and the first question was, how can we even think about collecting a dataset and understanding, purely from a motion-sensor perspective, what does a fall look like?
So in order to do that, we consulted quite a number of experts who worked in a few different university labs in different places. We consulted on the mechanics of a fall: what the biomechanics are, what happens to the human body, and how people react when they fall.
We collected a lot of data in controlled environments: self-induced falls, people strapped into harnesses, loss-of-balance events, and we looked at what those looked like. So that kind of kicked us off.
And we were able to start that process, building that initial dataset to really understand what falls look like, break down how we actually think about detection, and work out what kinds of analysis of fall data we needed.
We also started a major multi-year data collection effort, collecting sensor data from people doing other activities without falling. The key is being able to distinguish between what is a fall and what is not.
And then, in the process of developing this, we also had to figure out how we could actually validate that this thing works. So one thing we did was go to LA and work with a stunt team. We had a group of people take our finished product and test it, basically to validate it across all the different activities and falls people were actually performing.
And they were trained professionals, so they didn't hurt themselves doing it. We were actually able to detect all of these different types of falls. It was really cool to see.
MNH: So you worked with stuntmen to see how the sensors worked?
Unadkat: Yes, we did. So we had a lot of different types of falls that we had people do and simulate. And, in addition to the rest of the data that we collected, it gave us this kind of validation that we could see this thing working in real-life situations.
MNH: How can it tell the difference between someone playing with their child on the floor and hitting their hand against the floor, or something similar, and taking a substantial fall?
Unadkat: So there are several ways to do it. We use sensor fusion between a few different types of sensors on the device, including the barometer, which can indicate a change in altitude. So when you fall, you go from a certain level down to the ground.
We can also detect when a person has been stationary and lying down for a certain period of time. So these signals feed into our assessment: okay, this person was moving, they suddenly had a hard impact, and they weren't moving anymore. They probably took a hard fall and probably need help.
We also collected large datasets of people doing the kinds of things we were talking about, free-living activities throughout the day with no falls, and fed that into our machine learning model through these huge pipelines we created to gather and analyze all of this data. That, together with the other dataset of actual high-impact falls, lets us distinguish between these types of events.
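To make that concrete, here is a minimal sketch of the kind of impact-plus-altitude-plus-immobility gating Unadkat describes, feeding into a learned classifier. All names, thresholds, and the model interface are illustrative assumptions, not Google's implementation, which is a trained model rather than hand-tuned rules.

```kotlin
// Illustrative assumptions throughout: thresholds, field names, and the
// classifier interface are ours, not Google's.
data class SensorWindow(
    val peakAccelG: Double,             // peak accelerometer magnitude, in g
    val altitudeDropMeters: Double,     // barometric altitude change over the window
    val postImpactStillSeconds: Double  // how long the wearer stayed stationary after impact
)

interface FallClassifier {
    fun score(window: SensorWindow): Double  // 0.0..1.0 fall likelihood, learned from data
}

fun isLikelyHardFall(window: SensorWindow, model: FallClassifier): Boolean {
    // Physical gates mirroring the interview: a hard impact, a downward
    // altitude change from the barometer, and the wearer lying still afterward.
    val hardImpact = window.peakAccelG > 3.0                 // assumed threshold
    val droppedInAltitude = window.altitudeDropMeters > 0.5  // assumed threshold
    val stayedDown = window.postImpactStillSeconds > 30.0    // matches the ~30 s prompt delay
    // The trained classifier makes the final call; the gates cut false positives
    // from events like playing on the floor or banging a hand against a table.
    return hardImpact && droppedInAltitude && stayedDown && model.score(window) > 0.8
}
```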
MNH: Does the Pixel Watch constantly collect data so that Google can see how it performs in the real world in order to improve it?
Unadkat: We have an opt-in option for users where, if they sign up, when they get a fall alert, we receive data from their devices. We are able to take this data, incorporate it into our model, and improve the model over time. But this is something that, as a user, you have to manually opt into and confirm: "I want you to do this."
MNH: But if people do it, then it’s going to be continuously improved.
Unadkat: Yes, exactly. That's the idea. But we are continuously trying to improve all these models: continuing to collect data internally, continuing to iterate and validate, increasing the number of use cases we are able to detect, increasing our overall coverage, and decreasing the false positive rate.
MNH: And Edward, what was your role in creating the fall detection capabilities?
Edward Shi: Building on all the hard work that Paras and his team had already done: the Android and Pixel personal safety team is really focused on protecting the physical well-being of users. And so there was a great synergy there. One of the features we launched before was car crash detection.
And so, in many ways, they are very similar. When an emergency event is detected, the user may be unable to get help for themselves, particularly if they are unconscious. How then can we escalate that? And then making sure, of course, that false positives are minimized. On top of all the work Paras' team had already done on the detection side, how can we minimize false positives through the user experience?
So, for example, we check with the user. We have a countdown. We have the haptics, and then we also have an audible alarm, the whole UX, the user experience that we designed there. And then, of course, when we actually call the emergency services, particularly if the user is unconscious, how do we relay the necessary information so that an emergency call taker can figure out what’s going on and then dispatch the appropriate help to that user? And so that’s the work that our team did.
And then we've also worked with emergency dispatch call-receiving centers to test our flow and validate: hey, are we providing the necessary information so they can triage? Do they understand the information? And would it be useful to them in an actual fall event, if we made the call on the user's behalf?
MNH: What kind of information would you be able to get from the watch to pass on to emergency services?
Shi: Where we come into play is essentially after the whole algorithm has already done its job and said, "Okay, we detected a hard fall." Then, in our user experience, we don't make the call until we've given the user a chance to cancel it and say, "Hey, I'm fine." So, in this case, we now assume the user is unresponsive.
So when we make the call, we're actually providing context to say, hey, the Pixel Watch has detected a potential hard fall. The user didn't respond, so we share that context along with the user's location. We keep it pretty succinct, because we know that succinct, concise information is best for them. But if they have the context that the fall happened, that the user may be unconscious, and the location, hopefully they can send help to the user quickly.
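As a rough illustration of that succinct handoff, here is a hypothetical sketch of the message the watch might assemble; the type and field names and the exact wording are assumptions for illustration, not Google's.

```kotlin
// Illustrative only: hypothetical names, sketching the succinct context the
// interview describes (potential hard fall, no user response, last known location).
data class LatLng(val latitude: Double, val longitude: Double)

fun buildEmergencyContext(location: LatLng): String =
    "Pixel Watch detected a potential hard fall. " +
        "The user did not respond to alerts. " +
        "Last known location: %.5f, %.5f.".format(location.latitude, location.longitude)
```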
MNH: How long did it take to develop?
Unadkat: I've been working on it for four years. Yeah, it's been a while. And, you know, we had initiatives within Google to kind of understand the space, collect data and things like that long before that, but this particular effort started out a little smaller and grew from there.
In part two of our series, we’ll explore the challenges the teams faced during the development process and what future iterations of the Pixel Watch might look like.