At I/O 14, there was a lot of news around Glass, but some of the more interesting things I learned came from talking to several members of the Glass team and attending several sessions. On Tuesday, I attended a lunch hangout organized by the Glass team at El Mar restaurant at the Embarcadero. I spoke with a few Googlers and lots of other developers.
Charles Mendis is one of the senior software engineers on Glass; I believe he works at the application/framework level. He was an early member of the Android team and says the state of Glass today feels very much like Android did in its early days. A day before it went official, he told us about the XE18.3 update and several changes made to Glass hardware over recent months, including a 20% bigger battery (putting current capacity in the mid-600mAh range), revised nosepads, and a doubling of the RAM to 2GB. Along with the increase in battery capacity, the voltage has also been increased, a change he said helps with stamina during demanding tasks like recording video.
Charles was very candid and fully aware that the move to KitKat has been pretty rough for Glass. He told me that the team is almost entirely focused on performance and stability right now, with most new features taking a backseat to bringing Glass performance back to the levels seen on XE12 and earlier firmwares. He agreed with me and a few other Explorers that XE12 has been the best release so far; as he put it, it was a smattering of bug fixes and new features on top of a fairly stable codebase. Then, over the next four months, they rebased onto an entirely new version of Android, skipping Jelly Bean entirely. He feels it will take around two months to get performance back to where it should be. He told a story from when he was working on getting Android Eclair ready for the original Motorola Droid launch: they had to send an entire engineering team to one Googler’s apartment to figure out why the baseband crashed at one particular spot on his kitchen table. Many of the bugs on Glass only start popping up once a firmware is in use by a few thousand Explorers (OTAs are rolled out to 10%, then 50%, then 100% of devices, with the whole process usually taking around a week).
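The staged 10%/50%/100% rollout Charles described is usually implemented by hashing each device into a stable bucket so that a device reached at an early stage stays reached at every later one. Here is a minimal sketch of that idea; the hashing scheme and function names are my own assumptions, not Google's actual implementation — only the stage percentages come from the conversation.

```python
import hashlib

ROLLOUT_STAGES = [10, 50, 100]  # percent of devices, per the Glass team

def device_bucket(device_id: str) -> int:
    """Map a device ID to a stable bucket in [0, 100)."""
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    return int(digest, 16) % 100

def update_offered(device_id: str, stage_percent: int) -> bool:
    """An update at stage_percent reaches exactly the buckets below it."""
    return device_bucket(device_id) < stage_percent
```

Because the bucket is deterministic, expanding the percentage only ever adds devices; nobody who already received the OTA is dropped mid-rollout.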
I asked Charles what sort of thermal management and CPU underclocking occurs on Glass, something I suspect happens pretty frequently and really hampers performance, especially on the KitKat firmwares. He explained that on phones, the entire screen can be used as surface area for heat dissipation; on Glass, the physical area through which to dissipate heat is a fraction of that size. Glass sometimes clocks down to as low as 300MHz when it gets too warm (this is when we see the “Glass needs to cool down to run properly” warning on the homescreen).
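The throttling Charles described amounts to a simple trip-point governor: as temperature crosses successive thresholds, the allowed clock steps down. The sketch below illustrates that shape; the temperature thresholds and intermediate clock steps are hypothetical — only the 300MHz floor and the cool-down warning come from the conversation.

```python
CLOCK_STEPS_MHZ = [1008, 800, 600, 300]   # highest to lowest; values assumed
TEMP_THRESHOLDS_C = [55, 65, 75]          # trip points; values assumed

def throttled_clock_mhz(temp_c: float) -> int:
    """Return the highest clock allowed at the given device temperature."""
    step = 0
    for threshold in TEMP_THRESHOLDS_C:
        if temp_c >= threshold:
            step += 1
    return CLOCK_STEPS_MHZ[step]

def needs_cooldown_warning(temp_c: float) -> bool:
    """Glass shows its 'needs to cool down' card once it hits the floor."""
    return throttled_clock_mhz(temp_c) == CLOCK_STEPS_MHZ[-1]
```

With so little surface area, Glass crosses those trip points far sooner than a phone running the same workload, which is why the 300MHz floor shows up so often during video recording.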
I asked Charles about two major feature requests: Google Play Services, and official support for Eye Gestures in GDK apps, including the new wake feature. He said that Google Play Services is probably their number one feature request right now, but like all others, it is currently lower priority than performance and stability improvements. He indicated that adding Play Services to Glass is somewhat nontrivial because certain parts of the UI must be redesigned to work on Glass (Play Services doesn’t really have many user-facing UI elements, but the Google+ sign-in screens are among them; perhaps this part will be moved to the MyGlass app on the user’s paired phone, though I’m not sure).
Charles said that one of the main reasons eye gestures aren’t officially supported in apps on the MyGlass store is that the detection is constantly being worked on and the APIs may change in the future. Wink detection never really worked out as well as they had hoped, with the positioning of the sensor being one of the main causes. He seemed to intimate that focusing on Wink might have been a mistake, but I’m not sure. I told him that I really enjoyed the Glance gesture support added in XE18, and he appreciated hearing that. He told me it took a lot of work to get right, but the final result is something they are pretty pleased with (and of course planning to improve further). I asked if the detection relies solely on the eye sensor or uses accelerometer data as well; he said it is just the eye sensor, and they avoid false positives by only watching during a short 3-5 second interval after a notification arrives on Glass. He implied that if false positives are further reduced, the Glance gesture could become a way to wake Glass at any time (instead of tapping the touchpad or lifting the head), perhaps accompanied by a very subtle nod downwards. He agreed that this would be less socially awkward than the aggressive head-bobbing sometimes required to trigger the head-tilt wake (one of the reasons I don’t really use that feature).
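The gating trick Charles described is easy to picture in code: an eye-sensor event only counts as a Glance if it lands inside a short window after a notification arrives. The sketch below is my own illustration of that logic; the window length comes from the conversation (roughly 3-5 seconds), while the class and method names are assumptions.

```python
GLANCE_WINDOW_SECONDS = 5.0  # per the conversation, roughly 3-5 s

class GlanceDetector:
    """Accept eye-sensor events only shortly after a notification."""

    def __init__(self):
        self._last_notification_at = None

    def on_notification(self, now: float):
        """Record when a notification card arrived on Glass."""
        self._last_notification_at = now

    def on_eye_event(self, now: float) -> bool:
        """True only if the event falls inside the post-notification window."""
        if self._last_notification_at is None:
            return False
        return 0.0 <= now - self._last_notification_at <= GLANCE_WINDOW_SECONDS
```

Widening that window to "any time" is exactly what makes an always-on Glance wake hard: without the notification as a prior, every stray eye movement is a candidate false positive.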
Talking with Charles was reassuring, as the Glass PR team and Glass Guides sometimes do a really poor job of communicating with the Explorer community and can be patronizing and vague. His candid nature and the honesty with which he talked about the current shortcomings of Glass were refreshing. He seemed sincerely interested in my feedback and that of other Explorers, asking many of them what they liked least about Glass as well as how they used it.
Jeff Harris is a product manager on Glass focusing on the MyGlass store, the app review process, and developer relations. He works alongside Timothy Jordan, and the two gave a short talk on Thursday that I’ll elaborate on later; I learned a lot more talking to him at the party, though. Like Charles, Jeff was very candid and open to discussing the things that are going well for Glass as well as the things that haven’t been. He said that as a user himself, he’s been using Glass less lately, as the KitKat firmwares have taken a severe toll on performance and stability. I asked what trends he had noticed in the overall number of Googlers wearing Glass, and he admitted it was down, likely for the same reasons (I asked the same question of my two apartment mates who intern at Google and got a similar response). He said the team is clearly aware of what they need to do to fix that and is actively prioritizing those things over everything else right now. I asked Jeff about the future of the platform, and while he unsurprisingly didn’t reveal any juicy details, he did say that an SoC refresh is almost certain to happen before a consumer release. I talked a lot about some of the GDK features I’ve been using and what else I’d like to see (like Static Card support restored for GDK apps). He was an interesting guy to talk to, and he reaffirmed for me that the Glass team does ‘get it’: even though the public communications are sometimes unhelpful, the team itself is made up of people with a good perspective on the current state of Glass and the developer community.
Timothy Jordan is definitely quite a character; he and his hat were both present at the party. After I introduced myself, he remembered the email exchange we had a while back about the class. He said that if he or any other members of the Glass team were going to be down in LA during the semester, he’d definitely be interested in coming in to talk to us. He told me to email him next week once I/O dies down, which I’ll do; hopefully this pans out, as it would be great to get some additional support and visibility from the Glass team. As usual, Timothy was the king of non-answers, so I didn’t glean anything particularly interesting from him. I did ask about the current timeframe for Glassware submissions, and whether they were open to reviewing apps early and providing feedback to smooth out the process (incidentally, both topics would be covered in his presentation two days later). He said that one month is typical right now, but they are adding staff and streamlining the process to speed it up a bit. They are open to reviewing app designs early and often, even before coding starts, so he encouraged us to be proactive in doing that for our development in the class.
Aside from the Googlers I spoke to, I ran into a group of guys from Phandroid that I recognized (all explorers) as well as the developer of Winkfeed, an app for Glass that was released in December that essentially aggregates news content from a bunch of RSS feeds and pushes them to Glass via the Mirror API.
At I/O itself, I attended three Glass-related sessions: Wearable computing with Google, given by Timothy Jordan, Distributing your Glassware, given by Timothy Jordan and Jeff Harris, and Innovate with the Glass Platform given by Hyunyoung Song and P.Y. Laligand (they gave the excellent Hacking Glass session at I/O13).
The first session was fairly light content-wise but did feature one major announcement: Android Wear is coming to Glass. This is a pretty huge deal, as it means all existing Android apps that support push notifications will instantly start working on Glass much as they do on Android Wear watches. To enable really useful functionality, developers need to use the new Wear APIs so that rich actions and additional information can be presented through these notifications, but the work involved is far less than writing a separate app for Glass just to receive notifications (it’s also much better than the Mirror API for this purpose). This may or may not have major implications for what we develop in the class, but it’s a major development nonetheless. Timothy said that this will show up on Glass in the next couple of months, which was corroborated by what Jenny Murphy (Glass developer advocate) told me when I was talking to her after the session. I also asked her if Glass will support sending Wear notifications as well, but she said they are planning just to support receiving for now. This probably makes sense, since the phone is designed to be the hub to which both the watch and Glass connect. During the Q&A portion of the session, I asked Timothy how Wear notifications will affect the flow of the Glass timeline, which has traditionally been split into two areas: the area to the left of the home card, which features ongoing or upcoming information, and the area to the right, which features content from the recent past. Not all apps behave this way on Android, so it will be interesting to see how Wear notifications are worked into the Glass timeline. As expected, Timothy didn’t really have an answer for this (he didn’t have a real answer for most of the questions asked, as far as I could tell; I get the feeling the Android Wear on Glass plan was established fairly recently). The Wearable computing with Google session is viewable here.
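The timeline split I asked Timothy about can be modeled as a simple placement rule: cards timestamped in the future go to the left of the home card (soonest first), and past content goes to the right (newest first). This is just a toy model of the convention as I understand it, not anything Glass actually exposes; the function name and card representation are my own.

```python
def place_cards(cards, now):
    """Split (title, timestamp) cards around the home card.

    Left of home: future/ongoing items, soonest first.
    Right of home: past items, newest first.
    """
    left = sorted((c for c in cards if c[1] >= now), key=lambda c: c[1])
    right = sorted((c for c in cards if c[1] < now), key=lambda c: c[1],
                   reverse=True)
    return left, right
```

The open question from the Q&A is exactly where an arbitrary Wear notification falls in this model, since Android notifications don't carry a clean "upcoming vs. recent past" distinction.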
Distributing your Glassware was brief but useful, giving me more perspective on how the app submission process for Glass works and how heavily it differs from the Play Store submission process for Android. Unlike Android apps, Glassware is heavily scrutinized by the team because they want to make sure the UX stays fairly consistent with how Glass is designed. One of the things they announced at this session was the availability of a new Flow Designer tool, which allows screenshots/mockups of Glass apps to be laid out in a logical way and then submitted to the Glass review team for feedback even before actual development starts. Throughout the submission process, they will now deliver feedback more rapidly, piece by piece, instead of making developers wait a month or longer before hearing back. I asked a question about uploading new APKs: this must be done manually through a web form, although updated app versions are usually approved very quickly. I asked if they planned to offer anything similar to the Beta and Alpha channels on the Play Store; they don’t have this specifically, but they will whitelist certain Glass devices if you want to launch on the MyGlass store while keeping the tester base controlled to start. I asked if they offered aggregated crash reports and other stats; they don’t, so I asked if they recommended third-party services like Crashlytics, and they thought this was advisable. They said they will reach out to developers if they notice a lot of crashes from an app, but they don’t offer anything in real time. Jeff indicated that they are working on improving the dashboard they offer developers, but it’s much different from the Play Store and the team is much smaller.
In response to another question, Timothy explained how GDK apps that need account authorization are deployed to Glass: after the APK is installed, the account the user authorizes from the MyGlass app on their phone is pushed to the device. For development purposes, this can be done over ADB, something I didn’t know.
Innovate with the Glass Platform was pretty exciting – while no new announcements were made, the demos that H.Y. and P.Y. gave were very cool and definitely showcased some more unique use cases for Glass (ironically, neither of their apps would be approved on the MyGlass store right now as they used Eye Gestures and USB OTG mode which aren’t currently allowed). The session began with an overview of the GDK layer and how it compares in size and scope to the regular Android SDK – the emphasis was that it is fairly small, and just meant to augment what is already available on Android. They gave a brief look into the modifications the Glass team made to the Android OS to make it work on Glass, specifically in Location (GPS data is piped through from the phone) and the Account manager (unlike Android on phones, accounts for apps must be provisioned on the user’s phone with MyGlass and then pushed to Glass after the APK is installed).
P.Y. demonstrated an app he had built with some other Glass team members that connected to an Adidas soccer ball via Bluetooth LE and gave instant feedback after a kick using data collected from the ball. He walked through the implementation of Bluetooth LE connectivity, use of the wink sensor (not currently allowed on the MyGlass store), setting up voice triggers, CardScrollView, the Card API, and so on.
H.Y. demonstrated a pretty hacky setup that involved a webcam mounted on the back of her bicycle helmet, connected to Glass via ADB. Her app displayed both the front and rear camera feeds on a LiveCard and allowed the user to switch between them. Apparently Glass puts out 300mA over USB in OTG mode while most webcams require 500mA, so she had to use a pretty bulky setup involving a battery pack and a USB hub. It was an interesting demo that inspires some thinking about what other peripherals might be worth exploring on Glass.
In response to questions, P.Y. said that they are aware that Bluetooth keyboards and other devices no longer work post-XE16, and that they will be fixing it eventually (so it sounds like this was not intentional breakage). Android L will be coming to Glass – in fact, this may be required in order to support Wear, although I’m not sure if I understood this correctly (I’m also not sure if the Android Wear watches run KitKat or L – I’m pretty sure it’s not L).
Innovate with the Glass Platform can be viewed here.
Google I/O was exciting for a vast number of reasons, and the developments surrounding Glass show that the Glass team is working hard and has their priorities straight. I’m looking forward to the next few months as performance and stability improve, some of the most requested features make it out, and we start pushing the envelope in the class this fall.