This is just a collection of things that are on my mind. They aren't technical research ideas (but I have a lot of those and know where to find more if you need them). Some of these are questions that I think are relevant to the ubiquitous computing field. I love to think through them in my head and hope to write about them as I get older and experience more in the field. I've also left some quotes that I think are personally relevant.

Old versus new hardware

The definition of ubiquitous computing that I've heard, borrowed, and tweaked for my own taste is the following: "Ubiquitous computing takes advantage of technology that already exists in new and interesting ways. In some cases, that doesn't work, so we try to find a small modification that will make the existing technology work. If that fails, then we try to create a new device with some constraint (usually size, cost, and/or power)." The most contentious part of that definition is the idea of "technologies that already exist". I think there are two schools of thought. One school takes that statement literally. By taking advantage of technologies that are already out in the world, smartphones and WiFi being notable examples that come to mind, a good idea can be deployed to millions with a simple download. The other school of thought interprets the statement to say "technologies that already exist or are about to come out". One of the reasons I like ubiquitous computing and human-computer interaction so much is that it keeps a finger on the pulse of what is going to be hot in the near future. For example, as soon as the HoloLens was announced, people started thinking of interesting use cases and what kind of hardware they would like to see on it. Upgraded models of existing devices can also make some problems easier to solve. For example, when smartphones started to include two microphones, researchers got really excited because it opened the door to using audio to determine relative phone placement.
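That excitement comes from a simple signal-processing idea: with two microphones, the difference in when a sound arrives at each one constrains where the sound came from. As a rough illustration (this cross-correlation approach and all the names in it are my own sketch, not any particular paper's method):

    # A minimal sketch of estimating the time difference of arrival (TDOA)
    # between two microphone channels via cross-correlation. Assumes the two
    # channels are synchronized recordings of the same sound.
    import numpy as np

    def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate_hz: int) -> float:
        """Return the estimated delay (seconds) of mic_a relative to mic_b."""
        # The peak of the full cross-correlation marks the best alignment.
        corr = np.correlate(mic_a, mic_b, mode="full")
        # Lags run from -(len(mic_b) - 1) to +(len(mic_a) - 1).
        lag = np.argmax(corr) - (len(mic_b) - 1)
        return lag / sample_rate_hz

    # Toy example: the same click arriving 5 samples later at the second phone.
    click = np.zeros(1000)
    click[100] = 1.0
    print(estimate_tdoa(np.roll(click, 5), click, 44_100))  # ~0.000113 s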

I find myself debating these schools of thought all the time, particularly with respect to mobile health screening apps. Pretend we want to develop an app that uses the smartphone's microphone. Most smartphone microphones sample at 44.1 kHz, which means they can only capture frequencies up to roughly 22 kHz (the Nyquist limit), but pretend that there's a device that samples at 96 kHz and is therefore sensitive to frequencies up to 48 kHz. If there's a medical condition that we know produces a strong signature around 40 kHz (no idea what condition that would be) and maybe some weaker features around 20 kHz, what devices should we target? The first school of thought I mentioned earlier would say to try to use the features around 20 kHz. From an academic standpoint, it may be more challenging, but the app could be used immediately by more people. The second school of thought would say, and I will borrow a quote that I use often from a labmate, "Why do it the hard way when you can do it the right way?". If one smartphone model has a fancy microphone, maybe that will be the norm in the future, so why solve a problem that will be obsolete in a few years? You have to be really forward-thinking if you operate in this school of thought, because maybe you make a prediction about how things will be and it turns out to be wrong. If you predict correctly, though, you save a lot of hassle. Personally, I tend towards the first school of thought when it comes to health applications in particular. When we're talking about people's lives, I think immediate deployment is a very powerful thing. Even if phones "will have" a feature a few years down the road that makes a problem easier, if I can come up with a solution to that problem now to help someone live a better life, I think that's powerful. But believe me, I can see the argument the other way as well, and I think that the argument is much stronger when we think about areas other than health, like interaction and home sensing.
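To make the arithmetic in that hypothetical concrete, here is a tiny sketch of the Nyquist check; the sample rates match the hypothetical above, and the function name is just illustrative:

    # A minimal sketch of the Nyquist reasoning above: a microphone sampling
    # at sample_rate_hz can only faithfully capture frequencies below half that rate.
    def can_capture(sample_rate_hz: float, target_freq_hz: float) -> bool:
        """Return True if target_freq_hz falls below the Nyquist limit."""
        return target_freq_hz < sample_rate_hz / 2

    # Typical smartphone microphone vs. the hypothetical high-end device,
    # against the weaker (20 kHz) and stronger (40 kHz) disease signatures.
    for fs in (44_100, 96_000):
        for target in (20_000, 40_000):
            print(f"{fs} Hz sampling, {target} Hz signal -> {can_capture(fs, target)}")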

Why smartphone health apps?

Take this hypothetical situation: you are a physician, and a patient walks in, distraught and clutching their phone. They exclaim that they want to get tests done because an app told them they have lung cancer. What are you, the physician, supposed to do in this situation? Is the patient's concern enough to order a test? How are you supposed to trust the app? The obvious answer would be FDA approval, but I'm not sure that an FDA stamp solves everything. Mobile health screening apps are installed on phones that don't come from or stay in the clinic. Everyone has their own phone, and every phone has its own quirks. What if a diagnosis relies on the microphone and the patient's microphone was damaged by water? Ideally, developers will have measures in place to check the quality of the data their systems analyze, but can they account for every possible issue? I know that this is a question that the FDA is actively thinking about right now, but no matter their decision, won't some physicians still be skeptical? I'm also sure that apps for different conditions would lead to different actions from the physician. If an app says that someone has cancer, a doctor would be hesitant to immediately order a biopsy; if an app says that someone has high blood pressure, though, that's something the doctor can check themselves at no extra cost. Where do clinicians draw the line? Is it simply a matter of whether a standard clinical test is free or not? The way that the data is presented is also important. Telling a person they have "high blood pressure" versus reporting a reading of "112/83 mmHg" has different consequences in terms of trust for the clinician and stress for the patient.
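On the data-quality point, such a measure might be as simple as refusing to analyze recordings that fail basic sanity checks. A hedged sketch of what that gate could look like (the thresholds and names below are invented for illustration, not from any real screening app):

    # A minimal sketch of a pre-analysis quality gate for microphone data.
    # Assumes samples are normalized to [-1, 1]; the thresholds are
    # placeholders, not clinically validated values.
    import numpy as np

    def audio_quality_ok(samples: np.ndarray) -> bool:
        """Reject recordings that look silent (dead mic) or heavily clipped."""
        rms = np.sqrt(np.mean(samples ** 2))
        clipped_fraction = np.mean(np.abs(samples) > 0.99)
        if rms < 1e-4:  # near-silence: the microphone may be damaged or blocked
            return False
        if clipped_fraction > 0.01:  # over 1% clipped samples: distorted input
            return False
        return True

A check like this can't catch every failure mode, which is exactly the worry above, but it at least keeps the most obviously broken data out of the analysis.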

In the end, I view the area of mobile health as one that should focus on screening apps, not necessarily diagnostic apps. When met with uncertainty, these apps can err towards false positives (saying you might have the disease when you don't) rather than false negatives (saying you might not have the disease when you do). At the very least, that gets a person into the clinic who might not have gone otherwise. Do other people see it this way, though? In developing or remote regions, there may not be a clinical test to follow up with, so traveling physicians may just take the apps at their word. We could end up with WebMD syndrome taken to the extreme. I think it would be really interesting to survey clinicians on these kinds of questions.
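In classifier terms, erring towards false positives means choosing an operating point with high sensitivity. A small sketch of that threshold choice (the scores, labels, and function name are made up for illustration):

    # A minimal sketch of picking a screening threshold for high sensitivity:
    # lower the decision threshold until the target fraction of true cases is
    # flagged, accepting the extra false positives that come with it.
    import numpy as np

    def threshold_for_sensitivity(scores: np.ndarray, labels: np.ndarray,
                                  target_sensitivity: float = 0.95) -> float:
        """Return the highest threshold that still flags target_sensitivity of positives."""
        positive_scores = np.sort(scores[labels == 1])
        # Flagging everything at or above this score catches the target fraction.
        idx = int(np.floor((1 - target_sensitivity) * len(positive_scores)))
        return positive_scores[idx]

    # Toy data: model scores for ten people, label 1 = has the condition.
    scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])
    labels = np.array([1, 1, 0, 1, 0, 1, 0, 0, 0, 0])
    t = threshold_for_sensitivity(scores, labels, target_sensitivity=1.0)
    print(t, int((scores >= t).sum()), "people flagged for follow-up")  # 0.5 6 ...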