At PennAppsX, my team and I focused on building a hack that would promote healthy habits through your everyday computing.
Everyone wants to improve their quality of life through better health habits, but that’s hard to do when you’re busy coding and working at your computer all day. It means forming good exercise habits and maintaining good posture, which are easy to forget – so that’s why we’re making them an integral part of your routine with FitFactor. It’s two-factor authentication unlocked by healthy habits.
Before you visit your favorite websites – be it Facebook, YouTube, Reddit, or whatever else you add to your block list – you’ll need to reach your daily fitness goals. This means logging your goal number of steps, and sitting up straight when you’re at your computer. If you don’t, you’ll be reminded with a block page from the FitFactor Chrome extension.
We use Android Wear to track your steps, and a FitFactor Android app where you can monitor your progress and pick one of your Facebook friends as an accountability partner. If you fail to meet your goals, your friend can unlock your browser session – but you’ll be accountable to them.
We used an Intel Edison with a proximity sensor to approximate your posture and encourage you to maintain a good seating position while you’re at your computer.
- Parse backend
- Integrated Facebook login
- Intel Edison board with sonar proximity sensor
- Chrome extension + Chrome packaged app utilizing serial API to communicate with Intel Edison
- Android phone app and Android Wear app using Android Sensor APIs to record step count
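To tie these pieces together, the Chrome extension only needs a yes/no answer before letting a page load. Here’s a minimal Python sketch of that unlock decision – the function names, block list entries, and thresholds are all hypothetical, not our actual code:

```python
# Illustrative sketch of FitFactor's unlock decision: a blocked site stays
# blocked until both daily goals are met, unless the accountability partner
# overrides. Names and thresholds here are hypothetical.

BLOCK_LIST = {"facebook.com", "youtube.com", "reddit.com"}

def domain_of(url):
    """Extract the bare domain from a URL (naive, for illustration only)."""
    host = url.split("//")[-1].split("/")[0]
    return host[4:] if host.startswith("www.") else host

def should_block(url, steps_today, step_goal, posture_ok, partner_override=False):
    """Return True if the extension should show the block page for this URL."""
    if domain_of(url) not in BLOCK_LIST:
        return False            # not on the block list; always allowed
    if partner_override:
        return False            # your friend unlocked the session
    goals_met = steps_today >= step_goal and posture_ok
    return not goals_met
```

In the actual hack this check lived in the extension itself (in JavaScript, against goal data synced through the Parse backend), but the logic is the same: no steps, no Reddit.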
Back in May, I visited the Glass Basecamp in San Francisco for a tour of the surrounding area in the city with other Explorers, led by Glass Guides. The event coincided with the launch of the updated Field Trip app on Glass. The app had been available in a simple, Mirror API-powered form since Glass launched, but this was the first release built as a more feature-rich GDK app, providing much richer content and the ability to prompt the app to surface more points of interest around you in different categories.
Field Trip works in two main ways: contextually pushing cards to your Timeline that bring information about places around you, and in a voice-triggered fashion. Say “OK Glass, Explore Nearby” and the app will bring up bundles of cards featuring POIs in different categories including History, Art, Architecture, Food and Cool Stuff. The app remembers what cards you’ve seen and avoids showing you cards for places you’ve been to already, at least in contextual mode (this solves the previous issue I had, where the card telling me all the movie scenes shot in front of Bovard Auditorium kept popping up every time I cruised by on my bike).
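The “don’t show me Bovard again” behavior suggests the app keeps some record of the places it has already surfaced. Purely as speculation – this is my guess at the mechanism, not Niantic’s implementation – the de-duplication could look something like:

```python
# Speculative sketch: track already-surfaced place IDs in a set and filter
# contextual card pushes against it, so a POI is only shown once.

class CardDeduper:
    def __init__(self):
        self.seen = set()       # place IDs already pushed to the Timeline

    def filter_new(self, place_ids):
        """Return only places not yet shown, and mark them as seen."""
        fresh = [p for p in place_ids if p not in self.seen]
        self.seen.update(fresh)
        return fresh
```

Persisting that set across sessions (and only applying it in contextual mode, not “Explore Nearby”) would produce exactly the behavior described above.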
Like most Glassware, Field Trip is highly dependent on having a good internet connection. My experimentation with the app in downtown San Francisco unfortunately came right around the time of Google’s infamous XE16 update to Glass – the first release that moved the device from Android Ice Cream Sandwich (4.0) to KitKat (4.4), a massive jump that came with major performance, reliability, and connectivity problems. Despite this, Field Trip made the hour walk around SF far more informative and engaging for me. With Glass now running smoothly on XE21, Field Trip is a great example of a well designed Glass app.
There are some limitations and obvious missing features. Field Trip would benefit from an option to save places for later research/remembering – it would be cool to be reminded of a favorite POI when you return to a location. It should offer a way to launch the accompanying Field Trip apps for Android or iOS and pull up more content about the place you’re currently looking at (in general, Glass apps don’t offer integration with their phone counterparts in ways that should be obvious, and Field Trip is no exception). It also lacks an overall map view, which would be a nice way to browse around local POIs instead of swiping through stacks of cards. Perhaps a similar pan/scroll method to the Glass web browser could be employed.
Field Trip is developed by Niantic Labs, a team at Google known for the immensely popular location-based game Ingress. It comes as no surprise that Field Trip is currently one of the best examples of location-based Glassware on the store to date.
I put on my Glass for the first time on May 10th, 2013. A lot has changed since then. 20 software releases and 15 months later, Glass, and wearables in general, are still in their infancy. Today is still day one in the next wave of personal computing – and the alarm clock hasn’t even gone off yet.
I’m used to being an early adopter, but being a Glass Explorer is different than being the first to buy any other consumer device. The Explorer program – essentially a paid public beta for Google’s first head wearable platform – is unlike any other program that comes to mind. Along with 50,000 or so other Explorers, I’ve essentially become a walking spokesperson for Google over the past year – something I’m generally happy to do. I think the public reactions I get from wearing Glass nicely sum up the general public sentiments about the device: excitement, curiosity, and often, confusion. I’ve given over a hundred demos of Glass and answered an order of magnitude more questions about it. The questions I can’t answer are perhaps the most significant – when is a consumer version coming out, and how much will it cost? The road to consumer release for Glass has been rather long, and people are noticing.
In the Internet echo chamber, the fact that Glass includes a front-mounted camera apparently causes people to raise privacy concerns that somehow do not apply to the billion camera-equipped phones on this planet. I personally have not interacted with anyone who shared these concerns. That’s not to say these people don’t exist, just that I haven’t encountered them personally. That said, I think this general sentiment from the news media will play a role in determining the success or failure of Glass as a widespread consumer product. Its success in industry verticals like medicine, manufacturing, and field work is unlikely to be impacted.
From a product perspective, the trajectory of Glass has been very inconsistent. From May until about November of last year, Glass received updates approximately every month with generous feature additions and, importantly, API improvements, with the addition of the GDK (allowing native Android apps) coming at the end of the year. Then there were no updates for several months as the Glass software team migrated the device from Android 4.0 Ice Cream Sandwich to the newest version, 4.4 KitKat. The first KitKat release was a huge regression in performance and stability. I became frustrated with the device during this time, and my usage decreased from most of every day to perhaps once a week. The good news, though, is that recent updates have brought Glass back to the speedy performance it originally had. And from a development perspective, the benefits of being on a recent Android release are substantial.
On a daily basis, my most frequent uses of Glass are reading and responding to emails and Hangouts messages, checking Calendar events, reading tweets from Twitter users I’ve enabled notifications for, taking pictures, checking stocks, and checking the weather. When I’m heading to class, I often plug in earbuds and listen to Google Play Music or Pandora (I love having earbuds connected to Glass – I never have to worry about a long tangled cord running to my backpack or pocket). I am a heavy Evernote user and sometimes use Evernote on Glass for brief text notes (I wish their app offered more functionality, like reading or browsing notes).
When I’m traveling somewhere new, I make frequent use of Glass’s excellent heads-up navigation for walking and driving, the Google Now cards for nearby restaurants, the Field Trip app for information about nearby points of interest, and the Parking card functionality to figure out where I parked and how to get back there. I’m not a frequent poster on social media but I occasionally post photos directly from Glass to Twitter, Facebook, or Google+ (I sometimes edit them on my phone and post them to Instagram, something that can be accomplished in a few clicks).
Many of the current apps on Glass provide a subset of experiences that are available on other platforms. But the most compelling feature of Glass is its form factor. I took great pictures hiking Mission Peak earlier this summer that I just wouldn’t have been able to take, or wouldn’t have bothered to take, if I had to fumble with my phone.
There are as many items in the ‘cons’ column as there are pros. Glass battery life has been a roller coaster over the past year; with the current software revision, it lasts a serviceable eight hours in my normal usage patterns. But that’s short enough to be inconvenient. My solution is to plug Glass into my laptop while I’m working somewhere on campus, or next to my desktop at home, so by the time I’m ready to leave for somewhere, Glass has enough battery to last through whatever I have to do. Like any smartphone, screen-on time and camera usage are the two biggest battery culprits. Because Glass doesn’t fold, there isn’t an easy way to store it aside from the large carrying case. So if I don’t have a bag with me, I typically must leave Glass on all the time (this typically makes sense, however – I have frames and prescription transition lenses that turn Glass into perfect eyewear for both inside and out). From a performance standpoint, Glass has fallen behind the cutting edge – the two-year-old TI OMAP SoC can’t match the speed and smoothness of modern smartphone chips (and it’s more power hungry too). The Glass display would be more useful if it offered more screen real estate like the Epson Moverio, or even changed the visible screen size based on the application. While I personally don’t mind the appearance of Glass, a consumer release is far more likely to succeed if the device is smaller and more closely resembles a pair of normal glasses.
Google is no stranger to public betas (Gmail was in beta for five years!). But there is an inherently greater level of risk in thrusting an unproven device into the public eye. It’s been rewarding to play a part in this experiment to figure out what the next billion personal devices will look like. In several hackathon projects, I’ve experimented with auto-generating timelapses, providing contextual information about a place on Glass with Bluetooth LE beacons, and creating a real-time feed of information for students during a lecture. I’m looking forward to the next four months of focused Glass development in my Glass class at USC Annenberg to figure out what information capture and consumption looks like on the most personal computer yet.
At I/O 14, there was a lot of news around Glass, but some of the more interesting things I learned came from talking to several members of the Glass team and attending several sessions. On Tuesday, I attended a lunch hangout organized by the Glass team at El Mar restaurant at the Embarcadero. I spoke with a few Googlers and lots of other developers.
Charles Mendis is one of the senior software engineers on Glass. I believe he works on the application/framework level. He was early on the Android team and notes that the state of Glass today feels very much like Android did in the early days. A day before it went official, he was telling us about the XE18.3 update and several changes that have been made to Glass hardware over the recent months, including a 20% bigger battery (putting current capacity in the mid-600mAh range), revised nosepads, and doubling the RAM to 2GB. In addition to an increase in battery capacity, the voltage has also been increased, a change he said helps with stamina during demanding tasks like recording video.
Charles was very candid and showed that he is fully aware that the move to KitKat has been pretty rough for Glass. He told me that the team is almost entirely focused on performance and stability right now, with most new features taking a backseat to improving Glass performance back to the levels seen on XE12 and earlier firmwares. He agreed with me and a few other Explorers that XE12 has been the best release so far. As he put it, it was a smattering of bug fixes and new features on top of a fairly stable codebase. Then, for the next four months, they rebased onto an entirely new version of Android, skipping Jelly Bean entirely. He said that he feels it will take them around two months to get performance back to where it should be. He told a story of when he was working on getting Android Eclair ready for the original Motorola Droid launch, and how they had to send an entire engineering team to one Googler’s apartment to figure out why the baseband crashed at one particular location on his kitchen table. He said that many of the bugs on Glass only start cropping up once the firmware is in use by a few thousand Explorers (OTAs are rolled out to 10%, then 50%, then 100%, with the entire process usually taking around a week).
I asked Charles what sort of thermal management and CPU underclocking occurs on Glass, something I feel happens pretty frequently and really hampers performance, especially with the KitKat firmwares. He explained to me that on phones, the entire screen can be utilized as surface area for heat dissipation. On Glass, the physical area through which to dissipate heat is a fraction of the size. Glass sometimes clocks down to as low as 300MHz when it gets too warm (this is when we see the “Glass needs to cool down to run properly” warning on the homescreen).
I asked Charles about two major feature requests: Google Play Services, and official support for Eye Gestures in GDK apps, including the new wake feature. He said that Google Play Services is probably their number one feature request right now, but like all others, it is currently lower priority than performance and stability improvements. He indicated that the addition of Play Services to Glass is somewhat nontrivial because certain parts of the UI must be redesigned to work on Glass (Play Services doesn’t really have many user facing UI elements, but the Google+ sign-in screens are one of them. Perhaps this part will be moved to the MyGlass on the user’s paired phone, I’m not sure).
Charles said that one of the main reasons that eye gestures aren’t officially supported in apps on the MyGlass store is because the detection is constantly being worked on and that the APIs may change in the future. He said that Wink detection never really worked out as well as they had hoped, the positioning of the sensor being one of the main causes of this. He seemed to intimate that focusing on Wink might have been a mistake, but I’m not sure. I told him that I really enjoyed the Glance gesture support added in XE18 and he appreciated hearing that. He told me it took a lot of work to get right, but that the final result was something they are pretty pleased with (and of course planning to improve further). I asked if the detection is solely through the eye sensor or if accelerometer data is used as well; he said it was just the eye sensor, and they avoid false-positives by only watching during a short 3-5 second interval after a notification has arrived on Glass. He implied that if false positives are further reduced, it may be possible to use the Glance gesture as a way to wake Glass at any time (instead of tapping the touchpad or lifting the head), or perhaps accompanied with a very subtle nod downwards. He agreed that this option would be less socially awkward than the aggressive head-bobbing that is sometimes required to activate the head-tilt wake (one of the reasons I don’t really use that feature).
Talking with Charles was reassuring, as sometimes the Glass PR team and Glass Guides do a really poor job of communicating with the Explorer community and can be patronizing and vague at times. His candid nature and the honesty with which he talked about the current shortcomings of Glass were refreshing. He seemed sincerely interested in my feedback as well as that of other Explorers, asking many of them what they liked least about Glass and how they used it.
Jeff Harris is a product manager on Glass focusing on the MyGlass store, the app review process, and developer relations. He works alongside Timothy Jordan, and they both gave a short talk on Thursday that I’ll elaborate on later. However, I learned a lot more talking to him at the party. Like Charles, Jeff was very candid and open to talking about things that were going well for Glass as well as things that haven’t been. He said that as a user himself, he’s been using Glass less lately as the KitKat firmwares have taken a severe hit on performance and stability. I asked him what trends he noticed in the overall number of Googlers wearing Glass, and he admitted that it was down, likely for the same reasons (I asked this question of my two apartment mates who intern at Google and got a similar response). He said that the team is clearly aware of the things they need to do to fix that, and is actively prioritizing those things over anything else right now. I asked Jeff about the future of the platform, and while he unsurprisingly didn’t reveal any juicy details, he did say that an SoC refresh is almost certainly going to happen before a consumer release occurs. I talked a lot about some of the GDK features that I’ve been using and what else I’d like to see (like Static Card support restored for GDK apps). He was an interesting guy to talk to and reaffirmed for me that the Glass team does ‘get it’ – despite the fact that the public communications are sometimes unhelpful, the team itself is made up of people who have a good perspective on the current state of Glass and the developer community.
Timothy Jordan is definitely quite a character. He and his hat were both present at the party. After I introduced myself, he remembered the email exchange we’d had a while back about the class. He said that if he or any other members of the Glass team were going to be down in LA during the semester, he’d definitely be interested in coming in to talk to us. He told me to email him next week once I/O dies down, which I’ll do – hopefully this pans out. It would be great to get some additional support and visibility from the Glass team. As usual, Timothy was the king of non-answers to questions, so I didn’t glean anything particularly interesting out of him. I did ask about the current timeframe for Glassware submissions, and whether they were open to reviewing apps early and providing feedback to smooth out the process (incidentally, both of these topics would be covered in his presentation two days later). He said that one month is typical right now, but that they are adding staff and streamlining the process to speed it up a bit. They are open to reviewing app designs early and often, even before coding starts, so he encouraged us to be proactive in doing that for our development in the class.
Aside from the Googlers I spoke to, I ran into a group of guys from Phandroid that I recognized (all explorers) as well as the developer of Winkfeed, an app for Glass that was released in December that essentially aggregates news content from a bunch of RSS feeds and pushes them to Glass via the Mirror API.
At I/O itself, I attended three Glass-related sessions: Wearable computing with Google, given by Timothy Jordan, Distributing your Glassware, given by Timothy Jordan and Jeff Harris, and Innovate with the Glass Platform given by Hyunyoung Song and P.Y. Laligand (they gave the excellent Hacking Glass session at I/O13).
The first session was fairly light content-wise but did feature one major announcement: Android Wear is coming to Glass. This is a pretty huge deal, as it means that all existing Android apps that support push notifications will instantly start working on Glass much as they do on Android Wear watches. To enable really useful functionality, developers need to use the new Wear APIs to support rich actions and additional information presented through these notifications, but the work involved is far less than writing a separate app for Glass just to receive notifications (it’s also much better than the Mirror API for this purpose). This may or may not have major implications for what we develop in the class, but it’s a major development nonetheless. Timothy said that this will show up on Glass in the next couple of months, which was corroborated by what Jenny Murphy (Glass developer advocate) told me when I was talking to her after the session. I also asked her if Glass will support sending Wear notifications as well, but she said they are planning just on supporting receiving for now. This probably makes sense, since the phone is designed to be the hub to which the watch and Glass connect. During the Q&A portion of the session, I asked Timothy how Wear notifications will impact the flow of the Glass Timeline, which traditionally has been split into two areas: the area to the left of the home card, which features ongoing or upcoming information, and the area to the right, which features content that has occurred in the recent past. Not all apps behave this way on Android, so it will be interesting to see how Wear notifications are worked into the Glass Timeline. As expected, Timothy didn’t really have a real answer for this (he didn’t seem to have an answer for most of the questions asked, as far as I could tell – I get the feeling that the Android Wear on Glass plan was established fairly recently). The Wearable computing with Google session is viewable here.
Distributing your Glassware was brief but useful, as it gave me some more perspective into how the app submission process for Glass works and how it differs pretty heavily from the Play Store submission process for Android. Unlike Android apps, Glassware apps are heavily scrutinized by the team because they want to make sure that the UX is fairly consistent with how Glass is designed. One of the things they announced at this session was the availability of a new Flow Designer tool, which allows screenshots/mockups of Glass apps to be laid out in a logical way and then submitted to the Glass review team for feedback even before actual development starts. Throughout the submission process, they will now be delivering feedback more rapidly piece-by-piece instead of making developers wait a month or longer before hearing back. I asked a question about uploading new APKs – this must be done manually through a web form, although updated app versions are usually approved very quickly. I asked if they planned to offer any features similar to the Beta and Alpha channels on the Play Store – they don’t have this specifically, but they will whitelist certain Glass devices if you want to launch to the MyGlass store but keep the tester base controlled to start. I asked if they offered aggregated crash reports and other stats – they don’t, so I asked if they recommended using third-party services like Crashlytics, and they thought this was advisable. They said that they will reach out to developers if they notice a lot of crashes from an app, but they don’t offer anything in real time. Jeff indicated that they are working on improving the dashboard that they offer developers, but that it’s much different from the Play Store and the team is much smaller.
In response to another question, Timothy explained how GDK apps that need account authorization are deployed to Glass – after the APK is installed, the account that is authorized by the user from the MyGlass app on their phone is pushed to the device. For development purposes, this can be done over ADB, something I didn’t know.
Innovate with the Glass Platform was pretty exciting – while no new announcements were made, the demos that H.Y. and P.Y. gave were very cool and definitely showcased some more unique use cases for Glass (ironically, neither of their apps would be approved on the MyGlass store right now as they used Eye Gestures and USB OTG mode which aren’t currently allowed). The session began with an overview of the GDK layer and how it compares in size and scope to the regular Android SDK – the emphasis was that it is fairly small, and just meant to augment what is already available on Android. They gave a brief look into the modifications the Glass team made to the Android OS to make it work on Glass, specifically in Location (GPS data is piped through from the phone) and the Account manager (unlike Android on phones, accounts for apps must be provisioned on the user’s phone with MyGlass and then pushed to Glass after the APK is installed).
P.Y. demonstrated an app he had built with some other Glass team members that connected to an Adidas soccer ball via Bluetooth LE and gave instant feedback after a kick using data collected from the ball. He walked through the implementation of Bluetooth LE connectivity, using the wink sensor (not currently allowed on the MyGlass store), setting up voice triggers, CardScrollView, the Card API, etc.
H.Y. demonstrated a pretty hacky setup that involved a webcam mounted on the back of her bicycle helmet, connected to Glass via ADB. Her app displayed both front and rear camera feeds on a LiveCard and allowed the user to switch between them. Apparently Glass puts out 300mA over USB in OTG mode and most webcams require 500mA, so she had to use a pretty bulky setup involving a battery pack and a USB hub. It was an interesting demo that inspires some thinking about what other peripherals might be useful to explore on Glass.
In response to questions, P.Y. said that they are aware that Bluetooth keyboards and other devices no longer work post-XE16, and that they will be fixing it eventually (so it sounds like this was not intentional breakage). Android L will be coming to Glass – in fact, this may be required in order to support Wear, although I’m not sure if I understood this correctly (I’m also not sure if the Android Wear watches run KitKat or L – I’m pretty sure it’s not L).
Innovate with the Glass Platform can be viewed here.
Google I/O was exciting for a vast number of reasons, and the developments surrounding Glass show that the Glass team is working hard and has their priorities straight. I’m looking forward to the next few months as performance and stability improve, some of the most requested features make it out, and we start pushing the envelope in the class this fall.