Google Glass is a wearable computing device that responds to both touch and voice commands. This version is the prototype Explorer model. Brooks Kraft/Corbis

String theory proponents suggest that our universe might consist of at least 10 dimensions, though we are only able to directly perceive three spatial dimensions. We also experience time as the fourth dimension. Any further dimensions remain theoretical and can only be inferred through advanced mathematics. There may be universal mysteries beyond our reach to observe.
Even without considering string theory or dimensions beyond our comprehension, our world holds a vast array of information that many of us overlook in daily life. For instance, when exploring a new city, we typically rely on our senses for information. However, tools like smartphones and computers can provide an expanded understanding of the city's geography, history, economics, cuisine, and culture.
Augmented-reality apps superimpose digital information onto the real world around us. With one of these apps on your phone, you can point your camera at a city street and receive real-time details about your environment.
While these augmented reality apps are both educational and fun, the current design still feels a bit cumbersome. We're still holding our phones up and staring at screens – it’s like being on a 'Star Trek' away mission but stuck using a tricorder while missing the view.
Google is among the companies working to solve this issue with a wearable device that resembles glasses, but with one side of the frames noticeably thicker than the other. Known as Google Glass, it could offer a window into a new digital reality — or make you resemble the geeky Terminator always picked last for robo-dodgeball.
The Birth of Google Glass
Mac Smith and Mitch Heinrich, both members of the Google X team and contributors to Project Glass, introduce another groundbreaking concept, Project Loon, at the Googleplex in Mountain View, California. Talia Herman/Corbis

One of the many divisions at Google is Google X, often described by visitors as a mix between a computer lab and a mad scientist's lab [source: Miller and Bilton]. The projects tackled at Google X are ambitious, ranging from smart homes to space elevators. Project Glass is one of the many ideas explored within this division.
In April 2012, a Project Glass account surfaced on Google's social platform, Google Plus. The account’s debut post unveiled the project’s goal — to create a wearable computer designed to help you 'explore and share your world' [source: Google Glass]. Along with the announcement, a concept video was released showcasing the potential functions of the glasses.
Additional posts and articles revealed more details about the glasses. Some models had no lenses, but all versions featured a notably thick area on the right side of the frame, which housed the screen. To view the screen, users glance upward. That placement is deliberate: putting the display above the line of sight, rather than directly in it, avoids the obvious safety hazard of blocking the wearer's vision.
Shortly after releasing the concept video, Google allowed people to see a pair of the glasses in person. In the spring of 2012, Google co-founders Sergey Brin and Larry Page sported the advanced glasses at various events. At the Google I/O event on June 27, 2012, the company gave attendees an exciting live demo of the technology.
The Google I/O event took place at the Moscone Center in San Francisco, but the first part of the demonstration occurred outside, thousands of feet above the building. Google equipped a skydiving team aboard a blimp with Google Glass and set up a Hangout — a video chat on the Google Plus platform — with the team. The footage captured by the Glass cameras followed the team as they jumped from the blimp and skillfully landed on the roof of the Moscone Center.
The demo didn't stop there. Expert cyclists, also wearing Glass, performed stunts across the Moscone Center's roof until they reached its edge. From there, a climber wearing Glass rappelled down the side of the building and passed a package to another biker, who rode through the conference center to deliver it to Sergey Brin on stage.
The audience watched the entire spectacle on a giant screen as footage from the various Glass cameras played out before them [source: Google Developers]. Afterward, members of the Google X team responsible for the project explained the philosophy behind the glasses. Brin returned to the stage to announce that Google would begin shipping a developer version of the glasses, called the Explorer Edition, in early 2013 for $1,500. While this is the current cost for the Explorer glasses, it may not reflect the final price when Google Glass is released to the general public.
What Google Glass Does
Dr. Patrick Hu, an anesthesiologist, utilized Google Glass in 2013 to share EKG data with other doctors as part of a pilot program at UC Irvine. This could soon become a common practice among medical professionals worldwide, enabling them to collaborate on patient care. Cindy Yamanaka/ZUMA Press/Corbis

When the Explorers beta program launched, Google Glass owners gained the ability to use their glasses to:
- Receive reminders for appointments and calendar events.
- Get alerts for social network updates or text messages.
- Access turn-by-turn navigation.
- Receive notifications about travel options like public transport.
- Stay updated on local weather and traffic conditions.
- Capture and share photos and videos.
- Send messages or activate apps through voice commands.
- Perform Google searches.
- Engage in video calls via Google Plus.
As of early 2014, Google Glass does not yet overlay digital information on physical locations. However, imagine looking at a building and seeing the names of the businesses inside or glancing at a restaurant and accessing its menu. With the right apps, Glass could filter and provide a variety of relevant information.
For instance, imagine you're wearing Google Glass in London and gaze at the new Globe Theatre. The device could offer you options to learn about the history of the original Globe Theatre, the modern version that opened in the 1990s, or even current productions on stage. Google Glass could give you access to all that information right when you need it.
Looking ahead, Google Glass may one day help you track the people in your life or discover more about those you meet. With facial recognition technology and social networking, you could glance at someone new and immediately see their public profiles across various social platforms. (If that raises concerns for you, you're not alone — we’ll delve into the debates surrounding this feature shortly.)
Google Glass is compact yet filled with chips, sensors, and feedback systems. Let’s take a deeper look at its inner workings — or, more specifically, what’s behind the lens.
What Makes Google Glass Work?
In 2012, Sergey Brin, co-founder of Google, used the button atop his Glass to snap a photo at the Sun Valley Conference. Though he had to wear standard sunglasses over his Glass at that time, Google revealed several Glass-compatible frames in early 2014. Kevork Djansezian/Getty Images

If you were to dismantle a Google Glass, two things would likely happen: you'd uncover the technology inside that makes the glasses function, and you'd immediately regret destroying a $1,500 device. Luckily, others have already done the disassembling for you.
Google Glass can be controlled in a few different ways. One method is the capacitive touchpad located on the right side of the glasses. A capacitive touchpad senses touch through changes in capacitance: your finger disturbs the faint electrostatic field across the pad's surface, and a controller chip detects the change and registers your action. Horizontal swipes let you scroll through menus, while swiping down either backs out of a choice or, if you're on the main menu, puts the glasses into sleep mode.
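The gesture scheme described above can be modeled as a small state machine. The sketch below is purely illustrative — the class, menu entries, and behavior are assumptions based on this description, not Google's actual firmware:

```python
# Illustrative model of Glass-style touchpad gestures (not actual firmware).
# Horizontal swipes scroll the menu; swiping down backs out of a choice,
# or puts the device to sleep when you're already at the main menu.

class TouchpadMenu:
    def __init__(self, items):
        self.items = items          # top-level menu entries
        self.index = 0              # currently highlighted entry
        self.in_submenu = False     # True after tapping to select an entry
        self.asleep = False

    def swipe_forward(self):
        self.index = (self.index + 1) % len(self.items)

    def swipe_backward(self):
        self.index = (self.index - 1) % len(self.items)

    def tap(self):
        self.in_submenu = True      # select the highlighted entry

    def swipe_down(self):
        if self.in_submenu:
            self.in_submenu = False # back out of the choice
        else:
            self.asleep = True      # at the main menu: go to sleep

menu = TouchpadMenu(["take a picture", "record a video", "get directions"])
menu.swipe_forward()                # highlight "record a video"
menu.tap()                          # enter it
menu.swipe_down()                   # back out to the main menu
menu.swipe_down()                   # at the main menu now, so: sleep
```

The point of the model is the overloaded swipe-down gesture: the same physical motion means "back" or "sleep" depending on where you are in the menu.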
Another method of control is voice commands. A microphone in the glasses picks up your voice, and the microprocessor interprets the instructions. However, you can’t just say anything — there’s a list of approved commands, and nearly all of them start with "OK, Glass," which signals that a command is coming. For instance, saying "OK, Glass, take a picture" instructs the processor to capture an image of what you're looking at.
As of early 2014, the processor inside the Explorer version of Google Glass is a Texas Instruments Open Multimedia Applications Platform (OMAP) chip. It belongs to a class of microchips known as systems-on-a-chip (SoC), meaning several components — an ARM-based microprocessor, video processors, and a memory interface — are combined on a single piece of silicon. According to Texas Instruments' specs, the chip can handle 1080p video at 30 frames per second.
The main circuit board houses a SanDisk flash drive with 16 gigabytes of storage for media and apps, although only 12 gigabytes are available to users. Micron Memory (formerly Elpida) supplied the dynamic random access memory (DRAM) chip, which provides the working memory the microprocessor needs to run the software on the Glass.
Although you can use Google Glass to take photos and videos offline, you'll need to connect it to the Internet to unlock its full potential. You can do this through Bluetooth (by pairing with another device like a smartphone) or WiFi; a single chip in the device supports both connection methods. Additionally, a SiRFstarIV chip provides global positioning system (GPS) functionality, allowing Google Glass to pinpoint its location using satellite signals [source: Torborg and Simpson].
Cameras, Speakers and Sensors, Oh My!
The prism in Google Glass directs images towards your eye. This technology was demonstrated at a public event in Austin, Texas in 2013.
Bob Daemmrich/Corbis

While the internal components of Google Glass are fascinating, the most striking feature is the prism-like screen. When not in use, it looks like a transparent prism. Looking from above, you can see a diagonal line splitting the prism's width. This line represents the angled layer inside the prism, which functions as a reflective surface.
Images from Google Glass are projected onto the prism's reflective surface, which directs the light towards your eye. These images are semi-transparent, allowing you to see through them to the real world on the other side. As of early 2014, the display's resolution is 640 by 360. While it may not be high definition, the close proximity to your eye makes it appear clearer than you'd expect from such a low resolution.
If you glance just to the side of the display towards the edge of the glasses, you'll notice a camera lens. Google states that the camera can capture photos with a 5-megapixel resolution and record videos in 720p resolution.
Google Glass features a bone conduction speaker, which sends vibrations through your skull to reach your inner ear, eliminating the need for earphones or headphones. By combining the camera and speaker, you can make video calls. However, keep in mind that the person on the other side will see exactly what you're looking at, as the glasses only have a forward-facing camera.
In addition to the camera and speaker, Google Glass is equipped with a proximity sensor and an ambient light sensor. These sensors help the glasses detect if they are being worn or removed. You can set your Glass to automatically go into sleep mode when removed and wake up when put back on. They can also recognize actions like winking, enabling you to trigger commands, such as 'take a picture,' with just a wink. (Yep, no creepiness involved.)
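The sleep-on-removal behavior amounts to watching the proximity sensor for a wear/remove transition. Here is a minimal sketch of that logic; the distance threshold, class, and wink action are assumptions for illustration, not the device's real firmware:

```python
# Illustrative model of sensor-driven sleep/wake (thresholds are assumptions).
class GlassPower:
    def __init__(self):
        self.worn = True
        self.asleep = False

    def on_proximity(self, distance_mm):
        """Handle a proximity-sensor reading: a small distance means the
        glasses are on a face; a large one means they've been taken off."""
        was_worn = self.worn
        self.worn = distance_mm < 50        # assumed wear threshold
        if was_worn and not self.worn:
            self.asleep = True              # removed: go to sleep
        elif not was_worn and self.worn:
            self.asleep = False             # put back on: wake up

    def on_wink(self):
        """A detected wink triggers a command, but only while worn and awake."""
        if self.worn and not self.asleep:
            return "take a picture"
        return None

glass = GlassPower()
glass.on_proximity(500)      # taken off: the glasses go to sleep
glass.on_proximity(20)       # put back on: they wake again
print(glass.on_wink())       # take a picture
```

Gating the wink command on the worn-and-awake state is what keeps a sensor glitch from firing the camera while the glasses sit in a bag.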
Another sensor found in Google Glass is the InvenSense MPU-9150, an inertial sensor that detects motion. This sensor is useful in various functions, including one that allows you to wake up the Glass from sleep mode simply by tilting your head back to a preset angle.
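Tilt-to-wake boils down to estimating the head's backward tilt from the gravity vector the inertial sensor measures, then comparing it with the preset angle. This sketch shows the geometry; the wake angle and axis conventions are assumptions, not the MPU-9150's actual configuration:

```python
# Illustrative tilt-to-wake check using accelerometer gravity components.
# The preset angle and axis convention (z = up when level) are assumptions.
import math

WAKE_ANGLE_DEG = 30.0   # user-configurable tilt-back angle (assumed default)

def head_tilt_degrees(ax, ay, az):
    """Estimate head tilt as the angle between the sensor's 'up' axis
    and the measured gravity vector (in g units)."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(az / g))

def should_wake(ax, ay, az):
    return head_tilt_degrees(ax, ay, az) >= WAKE_ANGLE_DEG

print(should_wake(0.0, 0.0, 1.0))    # level head: False
print(should_wake(0.0, 0.5, 0.85))   # tilted back ~30 degrees: True
```

A real implementation would also low-pass filter the readings so ordinary head motion doesn't trip the threshold, but the angle comparison is the core of the feature.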
All of the chips and functionalities within Google Glass require power to function, and this power comes from a battery tucked away in the wider section of the stem, behind your right ear. It's a lithium polymer battery with a capacity of 2.1 watt-hours. Google claims that it takes just 45 minutes to charge the battery when using the included charging cable and plug [source: Torborg and Simpson].
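Those two figures imply an average charging power: a 2.1 watt-hour battery filled in 45 minutes (0.75 hours) draws about 2.8 watts on average — a back-of-the-envelope check that ignores charging losses:

```python
# Back-of-the-envelope average charging power (ignores charger inefficiency).
capacity_wh = 2.1            # battery capacity cited in the teardown
charge_time_h = 45 / 60      # 45-minute charge time, in hours
avg_power_w = capacity_wh / charge_time_h
print(round(avg_power_w, 1))  # 2.8
```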
Short-Sighted Glasses?
One of the key selling points of Google Glass is its discreet design, but could this very feature turn into a major security or privacy concern?
Brooks Kraft/Corbis

Not everyone has embraced Google Glass with open arms. Experts like Internet security professional David Asprey have expressed concerns regarding the product and its potential consequences.
A portion of Asprey's concerns arises from the language in Google's terms of service for Google Drive, a cloud-storage platform — in particular, a clause granting Google a license to use, host, and reproduce content that users upload [source: Connelly].
At first glance, that clause suggests that any content uploaded to Google Drive essentially becomes Google's property. However, Google explains that the provision enables it to display your data in various ways [source: Google Terms of Service]. For instance, if you upload a file to Google Drive and opt to make it public, Google can display this file in search results related to specific terms. The clause grants Google the ability to show parts of the file within those search results.
Asprey's argument is that such clauses seem to grant Google more control over user data than is justified. He also expresses concern that with facial recognition technology, the glasses could pose significant privacy risks.
Another worry is that Google might use these glasses as a means to gather personal information and target users with ads. While wearing the glasses, Google could track your activities and location, building a virtual profile that enables them to serve relevant advertisements directly to the display on your glasses, based on your behavior.
There are also concerns that receiving updates from social media directly in your line of sight could distract you from other important tasks, such as driving. In response, Google explained that the screen in the Google Glass eyewear is positioned above the frame, so you'll need to glance upward to view it. This design ensures that notifications like text messages or pop-up ads won't block your vision while you're wearing the glasses.
Although the Explorer program is well underway, it's still too early to determine whether these concerns are valid. It's possible the glasses may never be available for consumers. However, privacy advocates urge that we begin considering the potential consequences now, before they become serious issues in the future.
Google has already addressed some of these concerns and invites feedback. It's expected that the consumer version of the glasses — if the project reaches that stage — will differ from the prototypes. By then, Google may have discovered ways to offer a more immersive data-driven experience while safeguarding privacy.
