page.title=Android Wear @jd:body
Designing apps for wearable devices powered by Android Wear is substantially different from designing for phones or tablets: different strengths and weaknesses, different use cases, different ergonomics. To get started, you should understand the overall vision for the Android Wear experience and how apps fit into and enhance that experience. In the Downloads section, we've also provided source files for UI resources that you can use in your own apps.
Android Wear UI Overview
A new form factor deserves a new UI model. At a high level, the Android Wear UI consists of two main spaces centered around the core functions of Suggest and Demand. Your app will have an important role to play in both of these spaces.
The context stream, which embodies the Suggest function, is a vertical list of cards, each showing a useful or timely piece of information. Much like the Google Now feature on Android phones and tablets, users swipe vertically to navigate from card to card. Only one card is displayed at a time, and background photos are used to provide additional visual information. Your application can create cards and inject them into the stream when they are most likely to be useful.
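As a minimal sketch of how a handheld app might post such a card, the standard notification APIs are enough: notifications issued with NotificationCompat are bridged to the watch and rendered as stream cards. The activity class, icon resources, and card text below are placeholders, not part of the platform.

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class StreamCardHelper {

    private static final int NOTIFICATION_ID = 1;

    /**
     * Posts a notification that Android Wear bridges to the watch and
     * renders as a card in the context stream.
     */
    public static void postCard(Context context) {
        // ViewEventActivity is a hypothetical activity in your own app.
        PendingIntent viewIntent = PendingIntent.getActivity(
                context, 0,
                new Intent(context, ViewEventActivity.class),
                PendingIntent.FLAG_UPDATE_CURRENT);

        NotificationCompat.Builder builder = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_event)        // placeholder drawable
                .setContentTitle("Bus 52 departs in 10 min")
                .setContentText("Leave now to make your connection")
                .setContentIntent(viewIntent);

        NotificationManagerCompat.from(context)
                .notify(NOTIFICATION_ID, builder.build());
    }
}
```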
This UI model ensures that users don’t have to launch many different applications to check for updates; they can simply glance at their stream for a brief update on what’s important to them.
Cards in the stream are more than simple notifications. They can be swiped horizontally to reveal additional pages. Further horizontal swiping may reveal buttons, allowing the user to take action on the notification. Cards can also be dismissed by swiping left to right, removing them from the stream until the next time the app has useful information to display.
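Those extra pages and action buttons can be attached by extending the notification with a WearableExtender before it is posted. The following is a sketch under the same placeholder assumptions as above (icons, text, and the pending intent are illustrative only):

```java
import android.app.Notification;
import android.app.PendingIntent;
import android.content.Context;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class PagedCardHelper {

    private static final int NOTIFICATION_ID = 2;

    /** Posts a card with a second page and a wearable-only action button. */
    public static void postPagedCard(Context context, PendingIntent mapIntent) {
        // Second page, reached by swiping the card horizontally.
        Notification secondPage = new NotificationCompat.Builder(context)
                .setStyle(new NotificationCompat.BigTextStyle()
                        .bigText("Route 52: departs 8:05, arrives 8:40 downtown."))
                .build();

        // Action revealed by a further horizontal swipe.
        NotificationCompat.Action openMap = new NotificationCompat.Action.Builder(
                R.drawable.ic_map, "Open map", mapIntent).build();   // placeholder icon

        Notification notification = new NotificationCompat.Builder(context)
                .setSmallIcon(R.drawable.ic_event)                   // placeholder icon
                .setContentTitle("Bus 52 departs in 10 min")
                .extend(new NotificationCompat.WearableExtender()
                        .addPage(secondPage)
                        .addAction(openMap))
                .build();

        NotificationManagerCompat.from(context).notify(NOTIFICATION_ID, notification);
    }
}
```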
For cases where Android Wear does not suggest an answer proactively through the context stream, the cue card, which embodies the Demand function, allows users to speak to Google. The cue card is opened by saying, "OK Google" or by tapping on the background of the home screen. Swiping up on the cue card shows a list of suggested voice commands, which can also be tapped.
At a technical level, each suggested voice command activates a specific type of intent. As a developer, you can match your applications to some of these intents so that users can complete tasks using these voice commands. Multiple applications may register for a single voice intent, and the user will have the opportunity to choose which application they prefer to use.
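As one illustration, an activity can be declared in the manifest with an intent filter for a platform voice intent such as android.intent.action.SET_ALARM and then read the structured extras when the spoken command is delivered. SetAlarmActivity below is a hypothetical name, and the alarm-scheduling logic is left as a placeholder.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.provider.AlarmClock;

/**
 * Hypothetical activity registered in the manifest with an intent filter
 * for android.intent.action.SET_ALARM. If several apps register for the
 * same voice intent, the user is asked which one to use.
 */
public class SetAlarmActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        Intent intent = getIntent();
        if (AlarmClock.ACTION_SET_ALARM.equals(intent.getAction())) {
            int hour = intent.getIntExtra(AlarmClock.EXTRA_HOUR, -1);
            int minutes = intent.getIntExtra(AlarmClock.EXTRA_MINUTES, 0);
            // ...schedule the alarm for hour:minutes, then confirm...
        }
        finish();
    }
}
```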
Applications can respond to a voice command in the same way they respond to a tap on a regular in-stream action button: by adding or updating a stream card, or by launching a full-screen application. Voice input often takes the form of a command, such as "remind me to get milk," in which case displaying a simple confirmation animation is sufficient before automatically returning the user to the context stream.
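A minimal sketch of that confirmation pattern on the wearable is shown below, assuming a hypothetical AddReminderActivity launched by the voice command and the ConfirmationActivity provided by the Wearable UI library.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.support.wearable.activity.ConfirmationActivity;

/**
 * Hypothetical activity launched by a "remind me to ..." voice command.
 * It performs the action, shows the stock success animation, and finishes,
 * returning the user to the context stream.
 */
public class AddReminderActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // ...save the reminder derived from the delivered intent...

        // Show the standard success animation with a short message.
        Intent confirmation = new Intent(this, ConfirmationActivity.class);
        confirmation.putExtra(ConfirmationActivity.EXTRA_ANIMATION_TYPE,
                ConfirmationActivity.SUCCESS_ANIMATION);
        confirmation.putExtra(ConfirmationActivity.EXTRA_MESSAGE, "Reminder saved");
        startActivity(confirmation);
        finish();
    }
}
```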