
My first day developing for Google Glass

I was at the Google Glass Design Sprint & Workshop in London today. I don’t own a Google Glass and applied for one of the limited spaces available to developers who would be lent hardware for the day. Any ideas I was harboring of Google recognizing me as an ace hackathon attendee were dashed at the start, when we were told that the available slots had been filled by a random draw of applicants.

Vendor presentations at the start of hackathons tend to be either deadly dull or eye opening. Timothy Jordan explained why software written for Google Glass is not an App, or rather should not be written with that mindset, but needs to be thought of in terms of enhancing the user’s experience in real time, in the moment; this really clicked with me. He also made some excellent points on user interface issues specific to the Glass form factor, which I think went over the heads of most people present (this really needed its own slot).

I turned up with an App, sorry, user experience enhancement, reasonably well formed in my mind. The idea was to port the numbers tool to Android and have it scan the incoming camera image for numbers, with information about the interesting ones being spoken into the user’s ear (e.g., that number over there is the rest mass of the electron).
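
Something along these lines is what I had in mind for the lookup-and-speak part (a rough Java sketch only; the camera OCR stage is hand-waved away and the constants table is purely illustrative):

    import android.content.Context;
    import android.speech.tts.TextToSpeech;

    import java.util.HashMap;
    import java.util.Locale;
    import java.util.Map;

    // Sketch of the lookup-and-speak idea, not working Glass code: assume some
    // OCR stage (not shown) has pulled a numeric string out of the camera
    // preview; look it up in a table of "interesting" values and speak a note
    // into the user's ear.
    public class NumberCommentary {

        // Illustrative table only; keys are strings an OCR pass might return.
        private static final Map<String, String> INTERESTING = new HashMap<String, String>();
        static {
            INTERESTING.put("9.109", "that looks like the electron rest mass, times ten to the minus thirty-one kilograms");
            INTERESTING.put("3.14159", "that is pi to five decimal places");
            INTERESTING.put("6.022", "that could be Avogadro's number, times ten to the twenty-three");
        }

        private final TextToSpeech tts;

        public NumberCommentary(Context context) {
            tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
                @Override
                public void onInit(int status) {
                    if (status == TextToSpeech.SUCCESS) {
                        tts.setLanguage(Locale.UK);
                    }
                }
            });
        }

        // Called with each number the (hypothetical) camera OCR stage recognizes.
        public void onNumberSeen(String digits) {
            String note = INTERESTING.get(digits);
            if (note != null) {
                tts.speak(note, TextToSpeech.QUEUE_ADD, null);
            }
        }
    }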

On the day Google handed out half a dozen brief biographies of potential Glass users and asked us to come up with ideas for software to enhance the lives of these people. I came up with the idea of helping the triathlete on the cycling leg of his competition. Having watched highlights from the Tour de France, I knew that corners on the downhill stages of mountain routes present a significant problem to riders traveling at up to 65 mph, i.e., how hard should they brake to get safely around a corner whose curvature they cannot see? My idea was for the corner curvature user experience to come to life when the rider’s speed exceeded, say, 45 mph, displaying a simple colored wiggly line representing what lies around the bend.
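
The trigger logic I had in mind was roughly the following (a Java sketch under some big assumptions: the wiggly-line rendering lives in a hypothetical overlay view, the map data behind it is somebody else’s problem, and Location.getSpeed() reports metres per second, hence the mph conversion):

    import android.location.Location;
    import android.location.LocationListener;
    import android.os.Bundle;
    import android.view.View;

    // Rough sketch of the trigger only: the curvature overlay appears when the
    // rider's GPS speed passes a threshold and disappears again below it.
    public class CornerTrigger implements LocationListener {

        // 45 mph expressed in metres per second (Location.getSpeed() is m/s).
        private static final float SHOW_THRESHOLD_MPS = 45f * 0.44704f;
        // Slightly lower hide threshold so the overlay does not flicker around 45 mph.
        private static final float HIDE_THRESHOLD_MPS = 40f * 0.44704f;

        // Hypothetical view that draws the colored wiggly line.
        private final View curvatureOverlay;

        public CornerTrigger(View curvatureOverlay) {
            this.curvatureOverlay = curvatureOverlay;
        }

        @Override
        public void onLocationChanged(Location location) {
            if (!location.hasSpeed()) {
                return;
            }
            float speed = location.getSpeed();
            if (speed > SHOW_THRESHOLD_MPS) {
                curvatureOverlay.setVisibility(View.VISIBLE);
            } else if (speed < HIDE_THRESHOLD_MPS) {
                curvatureOverlay.setVisibility(View.GONE);
            }
        }

        @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
        @Override public void onProviderEnabled(String provider) { }
        @Override public void onProviderDisabled(String provider) { }
    }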

Listening to other people at my table and in other groups, I was surprised at how many were designing their idea as an App; that is, they wanted users to select from drop-down menus and/or specify various numeric/literal values. My pointing out that they were designing Apps was met with blank stares.

Progress on writing actual code was hampered by lunch, having to leave at 17:30, and adb not working out of the box under Windows (this prevented any communication between the Android SDK running on Windows and Google Glass). It took a while to figure out that the problem was adb/Windows (the Google folk had no idea it did not work, since they all used Linux or Apple Macs). As usual, an answer on Stack Overflow explained what changes needed to be made to the Google software. Asking around uncovered a few people with horror stories to tell about getting adb communication working under Windows.
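
For anyone hitting the same wall, the usual shape of the fix is something like the following (a sketch only, not the exact Stack Overflow recipe; 0x18D1 is Google’s USB vendor ID, and the device-specific hardware ID varies by machine):

    rem 1. Add the device's USB hardware ID to the [Google.NTx86] and
    rem    [Google.NTamd64] sections of the Google USB driver's
    rem    android_winusb.inf, then reinstall the driver.
    rem 2. Add the line 0x18D1 to %USERPROFILE%\.android\adb_usb.ini
    rem    (creating the file if it does not exist), then restart adb:
    adb kill-server
    adb devices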

Microsoft Windows has significantly slipped in developer tool mind share over the last few years (I am even thinking of buying my first Mac next time I change my laptop). However, there are still a lot of Windows developers out there and Google will need to fix this problem if they want to attract lots and lots of developers.

But the biggest mistake Google needs to fix is running out of coffee mid-afternoon at an all-day hackathon; don’t ever do that again.

  1. July 20, 2014 14:10 | #1

    Good review, Derek,

    I am surprised that you went down there with an App in mind. I did not get to go, as I got an email saying that I was not among the randomly selected hackers.

    Understanding the machine (the Google Glass hardware) and its interface is part of the undocumented aspect that I don’t think Google is preparing to unveil to the public. Why?
    1.) You can program the Glass to control robots, vehicles, devices, etc. in an autonomous fashion using other tools from Google SDKs and APIs: location based, map based, GPRS, character recognition, colour recognition, object mapping and rejection, distance and proximity mapping, data segmentation, etc. So I hope you see what I am trying to point out here. Now that Google is into drones and driverless cars/vehicles, their interests are at stake and they need to protect the intellectual property and investments made so far.

    2.) Not all hardware is the same, or to put it another way, not all hardware is PCs or mobile phones, so what is deemed an App in one environment may not be in another environment. So, in a nutshell, mobile Apps are not the same as applications for the Google Glass. The Glass operates in a client/server mode and displays on a tiny mirrored glass for feedback to the user, but it can be used not only by human beings: it can also be coupled to a machine, completely detached from the spectacle holder, or attached to other sensors. I play with hardware and embedded systems a lot, so Google Glass is not new in the embedded devices field but just a reincarnation of that area of development, as the Internet of Things echo is nothing new, to be honest.

    There are other glasses coming out, but consumer acceptance will be very low, as the excitement quickly dies down after a few weeks of use unless you have a particular area of application or business benefit. This is a big problem for micro devices and non-bi-focus devices like Google Glass, Oculus Rift and other augmented tools; mostly it stops at the entertainment side of life. I have loads of robots at home and I teach robotics and embedded devices at our college. Most people have expectations in mind of what the hardware should and should not do, but they can be let down when the opposite happens.

    Google Glass is just an embedded device with limited and constrained capabilities. Making it your very own device means hacking it and voiding your warranty, but its cost still makes people protect their adorable device.

    God bless!!!

    Best regards,
    Sanyaade
