
Windows* 8.1 Preview – what’s new for developers


Downloads


Windows* 8.1 Preview – what’s new for developers [PDF 327KB]

Windows 8.1 Preview is out, and many people are testing and exploring its new features and capabilities. As an update to a major release, 8.1 does not fundamentally change the way developers create their applications; instead, it adds capabilities and a number of small API additions. All of the changes are well documented by Microsoft on MSDN here and here, but below we highlight a few interesting additions and changes that caught our attention.

Text to Speech


Windows 8.1 Preview introduces a new API for speech synthesis, Windows.Media.SpeechSynthesis, which provides text-to-speech functionality for Windows Store apps. At the time of writing, 16 languages are available, including English, Chinese, Spanish, German, and Portuguese. Eventually you will be able to choose the gender of the voice with the Windows.Media.SpeechSynthesis.VoiceGender value, but at this time only "David" is available as a male voice, and only for English speech synthesis.
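The official Speech Synthesis sample linked below covers the full flow; the fragment here is only a minimal sketch (C++/CX for a Windows Store app) of how the API might be used. The page class and the XAML MediaElement named "media" are hypothetical, and error handling is omitted.

// Sketch only: synthesize a phrase and play it through a hypothetical
// XAML MediaElement named "media".
#include <ppltasks.h>
using namespace Windows::Media::SpeechSynthesis;
using namespace concurrency;

void MyPage::SpeakGreeting()
{
    auto synthesizer = ref new SpeechSynthesizer();
    create_task(synthesizer->SynthesizeTextToStreamAsync(L"Welcome to Windows 8.1"))
        .then([this](SpeechSynthesisStream^ stream)
    {
        // Hand the synthesized audio stream to the media element for playback.
        media->SetSource(stream, stream->ContentType);
        media->Play();
    });
}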

Text to speech adds a nice touch for users, especially those who rely on assistive technology. Research has provided evidence that for many people, synthesized speech makes computers sound as likable as a human voice (Stern et al. 2006). Your application can take advantage of this functionality to improve the overall user experience.

In the new batch of code samples for Windows 8.1 Preview, check the Speech Synthesis sample at http://code.msdn.microsoft.com/windowsapps/Speech-synthesis-sample-6e07b218.

DirectX*


Windows 8.1 introduces DirectX 11.2, and this update brings several improvements and new features.

HLSL Shader linker

Graphics programmers will like this improvement. By adding separate compilation and linking of HLSL shaders, DirectX 11.2 allows programmers to create HLSL libraries and link them into full shaders at run time. Several steps are involved, including creating an ID3D11Linker object, loading the library (or shader) blobs with D3DLoadModule, and instantiating module objects with ID3D11Module::CreateInstance. Windows 8.1 Preview also introduces the function linking graph (FLG), which lets you construct shaders as a sequence of precompiled function invocations that pass values to each other. This enables using C++ API calls to program shader structures.
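As a rough illustration of those steps, the sketch below assumes the library and the shader module have already been compiled offline with the lib_5_0 target; error handling, resource binding, and interface cleanup are omitted.

// Sketch of run-time shader linking with the DirectX 11.2 linker APIs.
#include <d3d11.h>
#include <d3dcompiler.h>

HRESULT LinkPixelShader(ID3DBlob* libraryBlob,   // HLSL library compiled as lib_5_0
                        ID3DBlob* shaderBlob,    // shader module compiled as lib_5_0
                        ID3DBlob** linkedBlob)   // receives the final ps_5_0 bytecode
{
    ID3D11Module* library = nullptr;
    ID3D11Module* shader = nullptr;
    ID3D11ModuleInstance* libraryInstance = nullptr;
    ID3D11ModuleInstance* shaderInstance = nullptr;
    ID3D11Linker* linker = nullptr;
    ID3DBlob* errors = nullptr;

    // Load both precompiled modules and create instances of them.
    D3DLoadModule(libraryBlob->GetBufferPointer(), libraryBlob->GetBufferSize(), &library);
    D3DLoadModule(shaderBlob->GetBufferPointer(), shaderBlob->GetBufferSize(), &shader);
    library->CreateInstance("", &libraryInstance);
    shader->CreateInstance("", &shaderInstance);

    // Link the shader module against the library at run time.
    D3DCreateLinker(&linker);
    linker->UseLibrary(libraryInstance);
    HRESULT hr = linker->Link(shaderInstance, "main", "ps_5_0", 0, linkedBlob, &errors);
    // ... release the interfaces and inspect 'errors' on failure ...
    return hr;
}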

For a full example on how to use the new HLSL Shader linker, check out this code example at http://go.microsoft.com/fwlink/p/?LinkID=310061.

Other highlights of Windows 8.1 Preview:

  • Direct3D* low-latency presentation API: Windows 8.1 Preview includes a new set of APIs for DirectX apps to present frames with lower latency, allowing for faster UI response (a sketch of the wait-before-render pattern follows this list).
  • Multithreading with SurfaceImageSource: Apps can also access and update SurfaceImageSource (SIS) XAML elements from a background thread. Get the XAML SurfaceImageSource DirectX interop sample at http://go.microsoft.com/fwlink/p/?LinkID=310060.
  • Interactive DirectX composition of XAML visual elements: You can use the SwapChainPanel class to render DirectX graphics content in an app that uses XAML; SwapChainPanel is similar to SwapChainBackgroundPanel in Windows 8, but with fewer restrictions on XAML tree placement and usage. Get the XAML SwapChainPanel DirectX interop sample at http://go.microsoft.com/fwlink/p/?LinkID=311762.
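The low-latency path relies on a waitable object exposed by the swap chain (DXGI 1.3). The fragment below is a minimal sketch under the assumption that the swap chain was created with the waitable-object flag; device and swap-chain setup are omitted.

// Sketch: block each frame until DXGI is ready to accept new work, which
// keeps the presentation queue short and input-to-display latency low.
#include <windows.h>
#include <dxgi1_3.h>

void WaitBeforeRendering(IDXGISwapChain2* swapChain)
{
    // Requires the swap chain to have been created with
    // DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT.
    HANDLE frameLatencyWaitableObject = swapChain->GetFrameLatencyWaitableObject();
    WaitForSingleObjectEx(frameLatencyWaitableObject, 1000, true);
    // ... render and Present() after the wait returns ...
}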

UI elements


Windows 8.1 Preview introduces new sizes of tiles and new ways to display more than two applications at the same time.

By default, the smallest size users can resize an application window to is 500x768 pixels. Depending on your application, however, you may want to support even smaller window sizes, down to the non-default minimum of 320x768 pixels. Note that while the application can become narrower, the height stays the same.

Your application must present an appropriate layout for each of the available sizes. Using the templates available in Visual Studio* helps guarantee that, but you should also make sure your images and icons are available in high-quality scalable formats, such as SVG.
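For example, a XAML page can switch between visual states as its size changes. The sketch below assumes the handler is wired to the page's SizeChanged event; the visual state names are hypothetical.

// Sketch (C++/CX): switch layouts when the window becomes narrow.
// "NarrowLayout" and "DefaultLayout" are hypothetical visual states.
void MyPage::OnSizeChanged(Platform::Object^ sender, Windows::UI::Xaml::SizeChangedEventArgs^ e)
{
    if (e->NewSize.Width < 500.0)
        Windows::UI::Xaml::VisualStateManager::GoToState(this, "NarrowLayout", true);
    else
        Windows::UI::Xaml::VisualStateManager::GoToState(this, "DefaultLayout", true);
}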

Tiles are also available in different sizes, and users can resize them. It is a good idea to provide the image shown on your application tile in all possible sizes and include them in your package. It will make your application look polished and match the development effort you put into the programming.

New Controls


New UI control elements are introduced with Windows 8.1 Preview, and provided here are a few examples, divided by framework:

JavaScript*

Hub

The Hub control separates the content into different sections and different levels of detail. This pattern is best for apps with large content collections or many distinct sections of content for a user to explore.

Repeater

More than a replacement for the ListView control, the Repeater is a flexible, easy-to-use control for generating HTML markup from a data set.

ItemContainer

This control makes it easy to create interactive elements that provide swipe, drag-and-drop, and hover functionality. Just add your content inside the ItemContainer.

BackButton

As the control name suggests, it’s a button that provides backward navigation within your app. The BackButton checks the navigation stack and disables itself if there is nothing to navigate back to.

XAML – C#/C++

AppBar

XAML now makes it easier to create an app bar with command buttons that reflect the platform design guidelines and behaviors. The default appearance of an icon is a circle, its content is set through the Label and Icon properties instead of Content, and everything can be compacted with the IsCompact property. Check out the sample code at http://go.microsoft.com/fwlink/p/?LinkID=310076.

Hub

The same Hub control described in the JavaScript section above. See the XAML sample code at http://go.microsoft.com/fwlink/p/?LinkID=310072.

Flyout

The Flyout control displays information or a request for user interaction. Unlike a dialog, however, a flyout can be dismissed by clicking or tapping outside of it. Windows 8.1 Preview also adds specialized controls that act like flyouts: MenuFlyout and SettingsFlyout. See, for example, the MenuFlyout sample at http://go.microsoft.com/fwlink/p/?LinkID=310074.

Desktop apps


Windows 8.1 Preview also includes some API improvements for developers programming or supporting native apps for Desktop.

High DPI support

Improved support for high-DPI monitors (200+ DPI). Apps can take advantage of high-DPI displays and adjust their pixel density when moved to a lower-DPI monitor.

Direct manipulation

Updates to DirectManipulation APIs increase app responsiveness and add ways to interact with apps.

  • Cross-slide is a new gesture used for selection and for initiating drag-and-drop via touch.
  • Autoscroll allows users to automatically scroll once they reach the end of visible content.
  • Native support for panning and zooming via a touchpad helps users without touch displays take advantage of touch capabilities of Windows 8.1 Preview.

Considerations


As we said at the beginning, these are only a few examples to give you an idea of the updates and new functionality in Windows 8.1 Preview. If your curiosity is piqued, download the Windows 8.1 Preview here and access the official Developers Feature Guide here.

References


Stern, S. E., Mullennix, J. W., and Yaroslavsky, I. (2006). Persuasion and social perception of human vs. synthetic voice across person as source and computer as source conditions. International Journal of Human-Computer Studies, 64(1), 43–52.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

 


A Different Kind of Dual Screen Experience - Control & Consume


Downloads


A Different Kind of Dual Screen Experience - Control & Consume [PDF 112KB]

Single Application, Separate Uses


I hate trying to use my mouse on my TV. I first struggled with this problem with my Blu-ray* player, which happened to be inside my new computer. I was so excited to finally experience Blu-ray. However, pausing the movie soured the improved picture experience. I had to get up, walk over to my computer, mouse over to the Blu-ray software’s control panel on the TV, and then hit the teeny-tiny button from 10’ away. I tried both a wireless mouse and a Windows* Media Center remote, and neither was satisfying. The distance between me and the bigger screen makes accuracy with a pointer very difficult. I am asking for the option to have a better 10’ experience with my applications.

Mouse Control from 10 Feet

Remote Control from 10 Feet

Figure 1 Home Dual Screen Setup

 

At work, I have two screens about 2’ from me with the desktop extended across both. I use these screens as one larger screen, keeping multiple panes of multiple programs open at once. This model works great for 2’ control, not for 10’ consumption.

Figure 2 Workplace Dual Screen Setup

 

Figure 3 Extended Desktop View

When I leave the cubicle, I prefer less work and more enjoyment on my TV. Big screens are better for most things, but they fail in most interactive experiences unless the application - like a game - is designed specifically to take the distant control into account. I watch sports and movies on my big screen TV. That’s pretty much it. No games for me. Since I have my computer hooked up to my TV, getting the mouse pointer to the right spot is a pain. Watching recordings on a screen that’s not 2’ in front of me is great. Controlling the computer software and picking which show to watch on that distant screen is NOT great at all. My challenge to all you developers out there – make my experience better!

I want to launch my movie and control it from my local “2-foot” screen, while doing other things on my computer. I want more software to take full advantage of all that reserve horsepower. What can be added to applications to let all of us do more on the PC? I’d like to be able to organize my movie collection while watching a movie – in the same application.

We all know programs that take advantage of a dual-screen setup. PowerPoint* or Keynote* presentations are often run in “Presenter’s View,” which gives the presenter access to their speaker notes and the presentation independently.

The photographer’s workflow application, Adobe* Lightroom*, has become one of my mainstays. When I review my images with clients at home, Lightroom displays an uncluttered view of an image on my big screen while I manage the metadata/details/edit the session’s collection on my smaller screen.

When the customer says they like the image, I rate the image and add print sizes on the screen that is closer without distracting the customer’s review experience.

Work Screen View

Customer Screen View

Figure 4 Lightroom Dual Screens – Single Application, Separate Views

When projected large, like the size of a wall, the images are stunning and clear. I can almost see the orders in the twinkle of my customer’s eyes as she views her wedding day.

If you don’t have access to any of the programs I’ve mentioned, and would like to experience the “control from 2’, consume from 10’ model” you can download an application Intel recently released. Intel® WiDi Media Share Software for Windows enables you to browse and share photos and videos on your computer with a second screen while controlling the second screen’s content from the first.

While this technology may look attractive to many YouTube dads like me, my photographer side is jazzed that it improves my client’s review experience and in turn, my income.

Download Intel® WiDi Media Share Software for Windows here.
More about Intel® Wireless Display Technology (Intel® WiDi) here.

About the author


Tim Duncan is an Intel Engineer and is described by friends as “Mr. Gidget-Gadget”. Currently helping developers integrate technology into solutions, Tim has decades of industry experience, from chip manufacturing to systems integration. Tim holds a Bachelor’s degree in Computer Science, plays bass (sousaphone too). He loves his family, dogs and the outdoors. Find him on the Intel Developer Zone as Tim Duncan (Intel)

Notices


Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Intel, the Intel logo, Ultrabook, and Core are trademarks of Intel Corporation in the US and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

4th Generation Intel® Atom™ Processor-Based Tablet Overview


Introducing the next generation Intel® Atom™ Processor (Code named “Bay Trail”)

Abstract


Intel has launched its latest Intel® Atom™ processor, code-named “Bay Trail”. It is the first Intel Atom processor based on 22-nm technology. This article discusses the key features of the platform, such as extended battery life, Intel® Gen7 graphics architecture, advanced imaging and video, improved performance, security, and more.

Platform Overview


The new processor offers Intel-level performance for apps, games, photos, videos, and web browsing in thinner, lighter, and cooler form factors. The Intel Atom processor is optimized for tablets and 2-in-1 devices. Tablets based on the new Intel Atom processor support multiple cameras with excellent image quality and feature integrated image signal processing for both still and video capture. The table below shows the “Bay Trail” improvements.

Base Feature         | Intel® Atom™ Processor Z2760 (“Clover Trail”)          | Intel® Atom™ Processor Z3000 Series (“Bay Trail”)
Process Technology   | 32 nm                                                  | 22 nm
Processors           | 2 Cores/2 Threads                                      | 4 Cores/4 Threads
Graphics             | Intel® Graphics Media Accelerator                      | Intel® HD Graphics (Gen 7)
Compute Performance  | 1X                                                     | Up to 2X1
Graphics Performance | 1X                                                     | Up to 3X2
Memory               | Up to 2 GB x32 LPDDR2 800 (6.4 GB/s memory bandwidth)  | Up to 4 GB x64 LPDDR3 1067 (17.1 GB/s memory bandwidth)
OS Support           | Windows* 8                                             | Windows 8/8.1 and Android

Comparison of Clover Trail vs Bay Trail features 

Intel Atom processor feature highlights


First-ever 22-nm Intel Atom processor

The first 22-nm Intel Atom processor is a quad-core system on a chip (SoC) with 4 cores/4 threads. With the CPU, graphics, and memory in one package, this modular design provides the flexibility to deliver a high-performance processor and graphics solution for multiple form factors.

Enhanced battery life

The new processor offers more than 10 hours of active battery life and approximately 30 days of standby3.

Graphics and Media Performance

The latest Intel Atom processor includes a 7th-generation Intel® GPU with burst technology to provide a stunning graphics and media experience. The new processor supports high-resolution displays up to 2560x1600 @ 60 Hz and supports Intel® Wireless Display (Intel® WiDi) technology through Miracast*. Seamless video playback is enabled by high-performance, low-power hardware acceleration of media encode and decode. The table below compares the two processors’ graphics features.

Intel Burst Technology 2.0

Automatically allows processor cores to run faster than the base operating frequency if they’re operating below power, current, and temperature specification limits.

Feature | Intel® Atom™ Processor (“Clover Trail”) | New Intel® Atom™ Processor (“Bay Trail”)
Graphics Core | SGX545 | Intel® HD Graphics: Gen7 with 4 EU
Video Decoder | VXD391 | Gen7 Media Decode & VXD392
Video Encoder | VXE285 (up to 1080p30) | Gen7 (up to 1080p30)
Graphics Turbo | No | Yes
Display Ports | MIPI DSI, HDMI 1.3a, LVDS (via MIPI DSI bridge) | HDMI 1.4, eDP 1.3, DP 1.2
High-Bandwidth Digital Content Protection (HDCP) | 1.3 | 2.1 (for wireless display), otherwise 1.3
Display power savings features | Intel® DPST 3.0 (Intel® Display Power Saving Technology), DSR (Display Self Refresh), CABC (Content Adaptive Backlight Control) | Intel® DPST 6.0, PSR (Panel Self Refresh), DRRS (Dynamic Refresh Rate Switching)
Intel® Wireless Display | No | Yes
Intel® Media SDK | Yes | Yes

Graphics Feature Comparison


Advanced Imaging and video

The new Intel Atom processor comes with an integrated image signal processor and supports excellent camera quality. It supports video capture at 1080p with full HD playback. Multi-axis digital image stabilization (DIS) and image alignment extend High Dynamic Range (HDR) capture to moving scenes, reducing motion blur, and ghost removal is likewise extended from HDR stills to moving scenes.

Security Features

With people carrying their devices with them almost everywhere they go, they are more likely to lose their tablet or laptop. And even if they don’t lose them, devices are susceptible to the growing number of viruses and malware threats. Intel® Identity Protection Technology (Intel® IPT)4 can help businesses keep their critical information secure and protect against malware. Intel® IPT helps prevent unauthorized access to personal and business accounts by using hardware-based authentication.

New business-class tablets built with the Intel Atom processor Z3700 Series are specifically designed for the needs of business and the enterprise. Hardware-enhanced Intel® security technologies and support for software from McAfee offer robust security capabilities.

Intel® Wireless Display benefits on Intel Atom processor

Intel® WiDi enables content-protected HD streaming and interactive usages between tablets and TVs. It supports full 1080p video and low-latency gaming, and it is Miracast compliant. Intel® WiDi can be used to link health indicators as well. A few of the capabilities of Miracast-enabled apps are:

  • Share & Enjoy: use a big screen HDTV to enjoy and share media with family and friends
  • Wireless: quickly and securely connect with standard Wi-Fi to a TV without cables
  • Easy Set-up: simple user interface makes it easy to connect; no additional remote controls
  • Portable: adapter is small and light, so solution can move with you

Resources for Developers


Below are links to some resources for programming on Windows 8 that can help you take advantage of the new Intel Atom processor features.

1: Optimize apps for touch: The latest devices with Intel Atom processors include touch screens. To learn more about UX/UI guidelines and how to optimize app design for touch, see:

2: Optimize apps with sensors: Intel Atom processor-based platforms come with several sensors: GPS, Compass, Gyroscope, Accelerometer, and Ambient Light. These sensor recommendations are aligned with the Microsoft standard for Windows 8. Use the Windows sensor APIs, and your code will run on all Ultrabook™ and tablet systems running Windows 8.  For more information, see:

3: Optimize apps with Intel platform features: Take advantage of the security features such as Intel Anti-Theft Technology4 and Intel Identity Protection Technology with HD Graphics. Please refer to resources below for more information on each. For more information, see:

4: Optimize for visible performance differentiation: Intel® Quick Sync Video encode and post-processing for media and visual intensive applications. For more information, see:

5: Optimize app performance with Intel® tools: Check out Intel® Composer XE 2013 and Intel® VTune™ Amplifier XE 2013 for Windows. These suites provide compilers, Intel® Integrated Performance Primitives, and Intel® Threading Building Blocks that help boost application performance. You can also optimize and future-proof media and graphics workloads on all IA platforms with the Intel® Graphics Performance Analyzers 2013 and the Intel Media SDK. For more information, see:


1 Claims for Intel® Atom™ Processor Z3770 (up to 2.40GHz, 4T4C Silvermont, 2MB L2 Cache) are based on an internal Intel® Reference design tablet which is not available for purchase: 10” screen with 25x14 resolution, Intel Gen 7 HD Graphics, pre-production graphics driver, 2GB (2x1GB) LPDDR3-1067, 64GB eMMC solid state storage, 38.5 Whr battery. Based on TouchXPRT, WebXPRT and SYSmark* 2012 Lite compared to Intel Atom Processor Z2760. Individual results will vary. Commercial systems may be available after future Windows updates. Consult your system manufacturer for more details. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

2 Claims for Intel® Atom™ Processor Z3770 (up to 2.40GHz, 4T4C Silvermont, 2MB L2 Cache) are based on an internal Intel® Reference design tablet which is not available for purchase: 10” screen with 25x14 resolution, Intel Gen 7 HD Graphics, pre-production graphics driver, 2GB (2x1GB) LPDDR3-1067, 64GB eMMC solid state storage, 38.5 Whr battery. Measured using 3DMark* Ice Storm—a 3D graphics benchmark that measures 3D gaming performance compared to Intel Atom Processor Z2760. Find out more at www.futuremark.com. Individual results will vary. Commercial systems may be available after future Windows updates. Consult your system manufacturer for more details. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to http://www.intel.com/performance.

3 Based on a 30W Hour battery on 19x10 resolution on 10.1” display. Higher resolution will require larger battery. Active use measured as 1080/30 fps local video playback. Battery life may differ based on SKU and SoC performance.

4 No computer system can provide absolute security. Requires an Intel® Identity Protection Technology-enabled system, including an enabled Intel® processor, enabled chipset, firmware, software, and Intel integrated graphics (in some cases) and participating website/service. Intel assumes no liability for lost or stolen data and/or systems or any resulting damages. For more information, visit http://ipt.intel.com/. Consult your system manufacturer and/or software vendor for more information.

Intel, the Intel logo, Atom, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2013 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

Mystic Blocks Brings the Magic of Gesture Recognition and Facial Analysis to Desktop Gaming


Case Study: Mystic Blocks Brings the Magic of Gesture Recognition and Facial Analysis to Desktop Gaming

By Erin Wright

Downloads


Case Study Mystic Blocks [PDF 849KB]

Developer Matty Hoban of Liverpool, England, is always looking for innovative ways to integrate his love of mathematics, physics, and software development with next-generation technology. So he welcomed the opportunity to participate in Phase 1 of the Intel® Perceptual Computing Challenge.

The Intel Perceptual Computing Challenge invited coders to push the boundaries of the Intel® Perceptual Computing software development kit (SDK) and Creative* Interactive Gesture Camera, which together offer significant advancements in human–computing interactions, including:

  • Speech recognition
  • Close-range depth tracking (gesture recognition)
  • Facial analysis
  • Augmented reality

Hoban is currently finishing his computing degree at the Open University. He is also the founder of Elefant Games, which develops tools for game developers in addition to the Bomb Breaker app for Windows* desktops and touch screens.

Preparation

After entering the Challenge, Hoban looked to the Perceptual Computing SDK and Creative Interactive Gesture Camera for inspiration. He explains, “I wanted to get a feel for them. I felt it wasn’t enough to take an existing idea and try to make it work with a perceptual camera. Whatever I made, I knew that it had to work best with this camera over all possible control methods.”

Testing the Gesture Camera and Perceptual Computing SDK

Hoban began by testing the capabilities of the Creative Interactive Gesture Camera: “The first thing I did, as anyone would do, was try the sample that comes with it. This lets you see that the camera is working, and it gives back real-time variables of angles for your head and the position of your hands.”

Hoban then ran sample code through the Perceptual Computing SDK. He says, “Capturing hand and head movements is simple. There are multiple ways of utilizing the SDK: You can use call-backs, or you can create the SDK directly to get the data you need.”

Prototyping with Basic Shapes

After getting familiar with the Gesture camera and the SDK, Hoban began manipulating basic shapes using the gesture-recognition abilities of the close-range, depth-tracking usage mode. He says, “Once I looked at the samples and saw that the real-world values for your hands returned well, I started to get an idea for the game.”

He developed a method of creating block-based geometric shapes using three two-dimensional matrices populated with ones and zeroes. Each matrix represents the front, bottom, or side of an individual shape. This method eliminated the need for three-dimensional (3D) software and expedited the process of generating shapes within the game. Figure 1 shows examples of the shape matrices.

Figure 1. Constructing shapes with matrices
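One plausible way to realize this idea is sketched below; the intersection rule and the names are assumptions for illustration, not Hoban's actual code.

// Hypothetical sketch of the matrix idea: a voxel of the 3D key is kept only
// if all three 2D silhouettes (front, bottom, side) mark it as filled.
const int N = 4;                 // blocks per axis
int front[N][N]  = { 0 };        // 1s and 0s: silhouette seen from the front
int bottom[N][N] = { 0 };        // 1s and 0s: silhouette seen from below
int side[N][N]   = { 0 };        // 1s and 0s: silhouette seen from the side

bool IsBlockSolid(int x, int y, int z)
{
    // x/y index the front view, x/z the bottom view, z/y the side view.
    return front[y][x] && bottom[z][x] && side[y][z];
}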

With the Gesture camera and shape matrices in place, Hoban added facial analysis to track head position in relation to the visual perspective on the screen—and Mystic Blocks was born.

Developing Mystic Blocks

Mystic Blocks is a magician-themed puzzle game that requires players to use hand motions to turn keys to fit approaching locks, as shown in Figure 2. The keys are a variety of 3D shapes generated by the matrix method described above.

Figure 2. Mystic Blocks level 1

“I’ve compared Mystic Blocks to the Hole in the Wall game show, where contestants need to position themselves correctly to fit an approaching hole,” explains Hoban. “Mystic Blocks does the same but with 3D geometry that uses rotation to allow the same shape to fit through many different-shaped holes.”

Players begin by turning the keys with one hand, but as the game progresses, they have to coordinate both hands to move the keys in a 3D space. In addition to mastering hand coordination, players must repeat each sequence from memory on the second try as the locks approach with hidden keyholes. If players want a better view of the approaching locks, they can shift the game’s perspective by moving their heads from side to side. To see Mystic Blocks in action, check out the demonstration video at http://www.youtube.com/watch?v=XUqhcI_4nWo.

Close-range Depth Tracking (Gesture Recognition)

Mystic Blocks combines two usage modes from the Perceptual Computing SDK: close-range depth tracking and facial analysis. Close-range depth tracking recognizes and tracks hand positions and gestures such as those used in Mystic Blocks.

Opportunities and Challenges of Close-range Depth Tracking

Hoban found creative solutions for two challenges of close-range depth tracking: detection boundaries and data filtering.

Detection Boundaries

Mystic Blocks gives players text instructions to hold their hands in front of the camera. Although players are free to determine their own hand positions, Hoban’s usability tests revealed that hand motions are detected most accurately when players hold their palms toward the camera, with fingers slightly bent as if about to turn a knob, as demonstrated in Figure 3.

Figure 3. Mystic Blocks hand position for gesture recognition

Additional usability tests showed that players initially hold their hands too high above the camera. Therefore, a challenge for developers is creating user interfaces that encourage players to keep their hands within the detection boundaries.

Currently, Mystic Blocks meets this challenge with graphics that change from red to green as the camera recognizes players’ hands, as shown in Figure 4.

Figure 4. Mystic Blocks hand recognition alert graphics

“I’d like to add a visual mechanism to let the user know when his or her hand strays out of range as well as some demonstrations of the control system,” notes Hoban. “I think that as the technology progresses, we’ll see standard gestures being used for common situations, and this will make it easier for users to know instinctively what to do.”

Yet, even without these standardized movements, Hoban’s adult testers quickly adapted to the parameters of the gesture-based control system. The only notable control issue arose when a seven-year-old tester had difficulty turning the keys; however, Hoban believes that he can make the game more child friendly by altering variables to allow for a wider variety of hand rotations. He says, “I have noticed improvements in the Perceptual Computing SDK since I developed Mystic Blocks with the beta version, so I am confident that the controls can now be improved significantly.”

Data Filtering

During user testing, Hoban noticed that the hand-recognition function would occasionally become jumpy. He reduced this issue and improved the players’ ability to rotate the keys by filtering the incoming data. Specifically, the game logic ignores values that stray too far out of the established averages.
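A hedged sketch of that kind of filter follows: it keeps a running average of the incoming hand value and ignores samples that stray too far from it. The threshold and smoothing factor are illustrative, not Hoban's values.

// Hypothetical jitter filter: reject samples far from the running average.
#include <cmath>

struct JitterFilter
{
    float average;
    bool  primed;

    JitterFilter() : average(0.0f), primed(false) {}

    float Update(float sample)
    {
        const float maxJump = 0.25f;  // reject samples further than this from the average
        const float alpha   = 0.2f;   // smoothing factor for accepted samples

        if (!primed) { average = sample; primed = true; return average; }
        if (std::fabs(sample - average) > maxJump)
            return average;           // stray value: keep the previous estimate
        average = (1.0f - alpha) * average + alpha * sample;
        return average;
    }
};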

In the future, Hoban would like to leverage the flexibility of the Perceptual Computing SDK to fine-tune the filters even further. For instance, he wants to enhance the game’s ability to distinguish between left and right hands and increase gesture recognition performance in bright, outdoor light.

Head Tracking

The Perceptual Computing SDK facial analysis usage mode can track head movements like those Mystic Blocks players use to adjust their visual perspectives. Hoban says, “The head tracking was simple to add. Without it, I would need to offset the view by a fixed distance, because the player’s view is directly behind the shape, which can block the oncoming keyhole.”

Mystic Blocks’ head tracking is primarily focused on side-to-side head movements, although up and down movements can also affect the onscreen view to a lesser extent. This lets players find their most comfortable position and add to their immersion in the game. “If you’re looking directly towards the camera, you’ll have the standard view of the game,” explains Hoban. “But if you want to look around the corner or look to the side of the blocks to see what’s coming, you just make a slight head movement. The camera recognizes these movements and the way you see the game changes.”

Sampling Rate

The Creative Interactive Gesture Camera provides Hoban with a sampling rate of 30 fps. The Mystic Blocks application, which runs at 60 fps, can process gesture recognition and head tracking input as it becomes available. Hoban states, “The Gesture Camera is responsive, and I am quite impressed with how quickly it picks up the inputs and processes the images.”

Third-party Technology Integration

Mystic Blocks incorporates The Game Creators (TGC) App Game Kit with Tier 2 C++ library for rendering and the NVIDIA PhysX* SDK for collision detection and physics. Hoban also used several third-party development tools, including Microsoft Visual Studio* 2010, TGC 3D World Studio, and Adobe Photoshop*.

These third-party resources integrated seamlessly with the Intel® Perceptual Computing technology. Hoban reports, “You just launch the camera and fire up Visual Studio. Then, you can call the library from the SDK and work with some example code. This will give you immediate results and information from the camera.”

Figure 5 outlines the basic architecture behind Mystic Blocks in relation to the Gesture Camera.

Figure 5. Mystic Blocks architecture diagram

The Ultrabook™ Experience

Mystic Blocks was developed and tested on an Ultrabook™ device with an Intel® Core™ i7-3367U CPU, 4 GB of RAM, 64-bit operating system, and limited touch support with five touch points. Hoban comments, “There were no problems with power or graphics. It handled the camera and the game, and I never came up against any issues with the Ultrabook.”

The Future of Perceptual Computing

Hoban believes that perceptual computing technologies will be welcomed by gamers and nongamers alike: “I don’t see it taking over traditional keyboards, but it will fit comfortably alongside established controls within most apps—probably supporting the most common scenarios, such as turning the page or going to the next screen with a flick of your hand. Devices will also be able to recognize your face, conveniently knowing your settings.”

According to Hoban, gesture recognition is a perfect control system for motion-based games like Mystic Blocks; however, game developers will need to strike a balance between perceptual computing and traditional keyboard control methods in complex games with numerous options. “If you take your hands away from the camera to use the keyboard, you might lose focus on what you’re doing,” he comments. Instead, he advises developers to enrich complex games with gesture recognition for specific actions, such as casting spells or using a weapon.

Facial analysis and voice recognition offer additional opportunities to expand and personalize gaming control systems. For example, Hoban predicts that facial analysis will be used to automatically log in multiple players at once and begin play exactly where that group of players left off, while voice recognition will be used alongside keyboards and gesture recognition to perform common tasks, such as increasing power, without interrupting game play.

“I would like to add voice recognition to Mystic Blocks so that you could say ‘faster’ or ‘slower’ to speed up or slow down the game, because right now you can’t press a button without losing hand focus in the camera,” notes Hoban.

And the Winner Is...

Matty Hoban’s groundbreaking work with Mystic Blocks earned him a grand prize award in the Intel Perceptual Computing Challenge Phase 1. He is currently exploring opportunities to develop Mystic Blocks into a full-scale desktop game, while not ruling out the possibility of releasing the game on Apple iOS* and Google Android* devices. “Mystic Blocks is really suited to the camera and gesture inputs,” he says. “It will transfer to other devices, but if I develop it further, it will primarily be for perceptual computing on the PC.”

In the meantime, Hoban has entered the Intel Perceptual Computing Challenge Phase 2 with a new concept for a top-down racing game that will allow players to steer vehicles with one hand while accelerating and braking with the other hand.

Summary

Matty Hoban’s puzzle game Mystic Blocks won a grand prize in the Intel Perceptual Computing Challenge Phase 1. Mystic Blocks gives players the unique opportunity to move shapes in a 3D space using only hand gestures. Players also have the ability to control the game’s visual perspective by moving their heads from side to side. During development, Hoban created his own innovative method of filtering data through the Perceptual Computing SDK and Creative Interactive Gesture Camera. He also gained valuable insight into the process of helping players adapt to gesture recognition and facial analysis.

For More Information

About the Author

Erin Wright, M.A.S., is an independent technology and business writer in Chicago, Illinois.

Intel, the Intel logo, Ultrabook, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

ooVoo Intel Enabling – HD Video Conferencing


Downloads


Download ooVoo Intel Enabling – HD Video Conferencing [PDF 754KB]

Throughout Q1/Q2 of 2013, Intel and ooVoo collaborated to enable multiple hardware-accelerated video conferencing use cases. These include standard person-to-person video conferencing at 720p, multi-party video conferencing (up to 12 participants), and media sharing. The work focused on making the most of the wide range of performance options available on systems running both Intel® Atom™ and 4th generation Intel® Core™ processors.

Materials presented within this paper target Microsoft Windows* 8.x operating systems. Key resources / technologies leveraged during the optimization process include:

To provide a complete picture of the optimization process, this paper discusses the overall analysis and testing approach, specific optimizations made, and illustrates before and after quality improvements.

Overall, focus was placed on ensuring video quality first and foremost. No significant emphasis was placed on power-efficiency optimization at this time. The quality improvements enabled by leveraging Intel hardware offloading capabilities are impressive and can be enjoyed in either person-to-person conferencing or in multi-party conferences for primary speakers.

Primary Challenge


Eliminating sporadic corruption occurring during network spikes and on bandwidth-limited connections was by far the biggest challenge faced during the optimization process. Initially, all HD calls experienced persistent corruption.

Analysis & Testing Approach


Before pursuing optimization efforts, a test plan and analysis approach was defined to ensure repeatability of results. Several key learnings were observed during our initial testing cycles. These included:

  • Establish a performance baseline – Before beginning to optimize the application, we made sure we had eliminated network inconsistencies and unnecessary traffic / noise, and that lighting and cameras were adequate. With the major variables under control, we were able to confirm that our testing approach and results were repeatable. From this baseline, we could begin the optimization process.
  • Establish minimum network bandwidth (BW) requirements – Poor quality networks lead to poor quality video conferencing. A minimum of 1-1.5 Mbits/sec available for uplink / downlink was required to realize a full HD conference.
  • Camera quality is important – Poor quality cameras lead to poor quality video conferencing. As camera quality degrades, the amount of noise, blocking, and other artifacts increases. This also tends to increase the overall BW required to drive the call.
  • Define, test, and validate the network – To be certain you are not reporting issues created by BW problems on your own network, testing is required to determine the amount of packet loss, jitter, etc. present on the network before testing can begin.

The following data was collected during each testing cycle to enable triage / investigation of corruption issues affecting video quality. Initial data collected at the beginning of the optimization process showed that even reliable Ethernet connections experience corruption problems despite very low packet loss and jitter on a ~17 MBit connection.

Test          | Intvl  | Tx Size     | BW             | Jitter   | Lost/Total Datagrams | Out of order | enc fps | dec fps | Quality
Ethernet VGA  | 10 sec | 20.2 MBytes | 17.0 Mbits/sec | 1.042 ms | 0.00%                | 1            | 15      | 15      | Some corruption and coarse image
Ethernet VGA  | 10 sec | 20.2 MBytes | 16.2 Mbits/sec | 1.016 ms | 0.00%                | 1            | 15      | 15      |
Ethernet 720p | 10 sec | 20.2 MBytes | 16.2 Mbits/sec | 1.016 ms | 0.00%                | 1            | 15.41   | 14.47   | Frequent corruption
Ethernet 720p | 10 sec | 20.2 MBytes | 16.2 Mbits/sec | 1.016 ms | 0.00%                | 1            | 15.22   | 15.01   |


Key Optimizations


The three major optimization phases are: phase 1 focused on quality of service (QoS), phase 2 examined the user interface behavior, and phase 3 took a hard look at the rendering pipeline itself.

  • Phase 1 - QoS approach optimization – Resulting in a switch from 1% to 5% acceptable packet loss.
  • Phase 2 – User interface optimization – Resulting in I/O pattern set to output to system memory versus video memory due to the use of CPU-centric rendering APIs and reduction in GDI workload.
  • Phase 3 - Rendering pipeline optimization – Resulting in the elimination of per pixel copies, use of Intel IPP for memory copies where possible, and update to Intel IPP v7.1.

Phase 1 – QoS Approach Optimization
Quality of service algorithms seek to ensure that a consistent level of service is provided to end users. During our initial evaluation of the software, it was unclear whether the QoS solution within ooVoo was too aggressive or whether our network environment was too unstable. Initial measurements of network integrity indicated that it was unlikely that the network conditions were causing our issues. Jitter was measured at around 1.0-1.8 ms, packet loss was at or near 0%, and very few packets (if any) were being received out of order. All in all, this indicated a potential issue on the QoS side of the application.

To find the root cause of the issue, it was necessary to perform a low-level analysis on the actual bitstream data being sent / received by the ooVoo application. Our configuration for this process was as follows:

Direct analysis of the bitstream being encoded on the transmit side and the bitstream reconstructed on the receiver side indicated that frames were clearly being lost somewhere. Further analysis of the encode bitstream showed that all frames could be accounted for on the transmit side of the call; however, the receive side of the call was not seeing the entire bitstream encoded on the transmit side.

As can be seen from the diagram above, the receiver (decoder) stream is missing frames throughout the entire call. After working closely with the ooVoo development team, it was found that relaxing the QoS to accommodate up to 5% packet loss improved things significantly during point-to-point 720p video calls.

Phase 2 – User Interface Optimizations
Today, graphics and media developers have a wide variety of APIs to select from to meet engineering needs. Some APIs offer richer feature sets targeting newer hardware while others offer backwards compatibility. Since backwards compatibility was a key requirement for the ooVoo application, legacy APIs developed by Microsoft Corporation such as GDI and DirectShow* are necessary.

The following simplified pipeline illustrates the key area (“Draw UI” in green) where optimizations took place during this phase.

Before diving into the details, a quick word about video and system memory is in order. In simple terms, video memory is typically accessible to the GPU, while system memory is typically accessible to the CPU. Memory can be copied between video and system memory; however, this comes at a significant performance cost. When working with graphics APIs, it is important to know whether the API you are using is CPU-centric. If it is, then it is critical to set the MSDK IOPattern to output to system memory. Failure to do this when using a CPU-centric rendering API may lead to very poor performance. In cases where APIs such as GDI are used to operate on the surface data provided by the MSDK, operations that require surface locking in particular will be the most costly.
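As a rough illustration (not ooVoo's actual code), requesting system-memory output when preparing a Media SDK decoder looks roughly like the fragment below; all other fields of mfxVideoParam are omitted.

// Sketch: ask the Intel Media SDK to deliver decoded frames in system memory
// so CPU-centric APIs (such as GDI) can access them without costly
// video-memory locks.
#include "mfxvideo++.h"

void ConfigureOutputPattern(mfxVideoParam& par)
{
    // Critical when a CPU-centric rendering path follows the decoder.
    par.IOPattern = MFX_IOPATTERN_OUT_SYSTEM_MEMORY;
    // ... fill in codec, resolution, and frame-rate fields, then pass 'par'
    // to MFXVideoDECODE::Init() as usual ...
}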

In the case of the ooVoo client application, it was observed that fullscreen rendering required significantly more processing power than when running in windowed mode. This puts us squarely in the case of needing to account for a CPU-centric API in our rendering pipeline.

A detailed look at the overall workload when in fullscreen mode illustrates the following GDI activity (see yellow). Measurements below were made on an Ivy Bridge platform with a total of 4 cores yielding 400% total processing power.

Continuing the investigation, it became clear that there was a significant difference in how the ooVoo application handled window versus fullscreen display modes. Note GDI workload virtually disappears in window mode.

The Intel® VTune™ analyzer was used to identify the area of code where the GDI workload was being introduced. After discussing the issue with the ooVoo team, it became clear that this was unexpected behavior: the application was calling GDI too frequently during fullscreen rendering. The solution was to limit the number of GDI calls made during each frame when in fullscreen mode. Despite the simple nature of this change, significant improvements were observed across the board:

GDI workload reduction impact and observations:

  • Limited rendering via legacy APIs such as GDI+ is possible for video conferencing applications if resources are already available in system memory and very limited calls are made to GDI+ during each frame.
  • Reduction of GDI+ call frequency within ooVoo application virtually eliminated all GDI overhead.
  • Overall application CPU utilization for ooVoo went down by ~4-5%. Within the app, the percentage of time spent on GDI+ work dropped from 10% to 0.2%.
  • The overall workload associated with the ooVoo application is more organized and predictable, with fewer CPU spikes due to less surface locking by GDI.
  • System wide reduction in CPU of ~20%.

The following diagram illustrates the ooVoo application workload and related GDI effort after our optimizations:

Final measurements post optimization follow:

Metric              | Previous Build   | Latest Build       | Delta / Improvement
System CPU Peaks    | ~200%            | ~175%              | Reduced ~25%
ooVoo.exe CPU Peaks | ~40%             | ~37%               | Reduced 3%
GDI+ Workload       | 10% of ooVoo.exe | 0.12% of ooVoo.exe | Reduced 10%
Total System CPU    | 159% of 400%     | 137% of 400%       | Reduced ~22%
ooVoo Total CPU     | 114% of 400%     | 110% of 400%       | Reduced ~4-5%


Phase 3 – Rendering Pipeline Optimizations
Our final step in the optimization process was to take a hard look at the backend rendering pipeline for any un-optimized copies or pixel format conversions that might be affecting performance. Three key things to watch out for include:

  • Per pixel copies – A copy operation executed serially for each pixel. For this type of operation it is always best to leverage Intel IPP; the Intel IPP package comes with copy operations optimized for Intel hardware (see the sketch after this list).
  • Copies across video / system memory boundaries – Instead of copying MSDK frame data from video to system memory yourself, it is more effective to let the MSDK stream to system memory for you.
  • Fourcc conversions – Fourcc color conversions are always expensive. If possible, try to get your data in the format you need and stay there. If converting between YUV / RGB colorspace, you can use either Intel IPP or pixel shaders to expedite.
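A minimal sketch of the first point follows: replacing a hand-rolled per-pixel loop with a single optimized Intel IPP call. The plane layout and strides are illustrative.

// Sketch: copy one 8-bit image plane with a single Intel IPP call instead of
// looping over every pixel; ippiCopy_8u_C1R is optimized for Intel hardware.
#include <ippi.h>

void CopyPlane(const Ipp8u* src, int srcStep, Ipp8u* dst, int dstStep,
               int width, int height)
{
    IppiSize roi = { width, height };
    ippiCopy_8u_C1R(src, srcStep, dst, dstStep, roi);
}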

Early on in the process of profiling the ooVoo application, it was clear that memory copies were affecting performance; however, it was not clear what opportunities existed to address the issue. The ooVoo team performed a detailed code review and found cases where Intel IPP copy operations were not being used, places where per pixel copies were used, and ultimately upgraded to Intel IPP v7.1 to benefit from the latest updates.

The results were impressive, giving us our first look at video conferencing at 720p on both Intel Core and Intel Atom platforms. The following before/after shots illustrate the improvements.

Point-to-Point
Note the elimination of blocky corruption in the facial area:

Configuration: Point to point, 720p, 15 fps, 1-1.5 MBits/sec, IVB:IVB, 4G network

Multi-Party
Note the level of detail enabled for primary speaker during multi-party conference.

Configuration: Multi-party via ooVoo Server, 4 callers + YouTube, 15 fps, IVB


Intel, the Intel logo, Atom, Core, and VTune are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Perceptual Computing: Practical Hands-Free Gaming


Download Article

Perceptual Computing: Practical Hands-Free Gaming [PDF 772KB]

1. Introduction

The concept of a hands-free game is not new, and many unsuccessful attempts have been made to abandon peripheral controllers to rely solely on the human body as an input device. Most of these experiments came from the console world, and only in the last few years have we seen controllerless systems gain significant traction.


Figure 1: The Nintendo U-Force – State of the Art Hands Free Gaming in 1989

Early attempts at hands-free gaming were usually highly specialized peripherals that only applied to a handful of compatible games. The Sega Genesis peripheral called the Sega Activator was a good example: an octagonal ring placed on the floor with the player standing in its center. Despite the advertised ninja-style motions, the device’s controls simply mapped to 16 buttons and produced restrictive game play, causing its silent demise.


Figure 2: The Sega Activator – an early infra-red floor ring for the Genesis console

More recent attempts such as the Sony Eye Toy* and Xbox* Live Vision gained further traction and captured the public’s imagination with the promise of hands-free control, but they failed to gain support from the developer community and only a few dozen hands-free games were produced.


Figure 3: The Sony Eye Toy* – an early current-generation attempt at controllerless gaming

As you can see, hands-free technology has been prevalent in the console world for many years, and only recently have we seen widespread success in such devices thanks to the Xbox Kinect*. With the introduction of Perceptual Computing, hands-free game control is now possible on the PC and with sufficient accuracy and performance to make the experience truly immersive.

This article provides developers with an overview of the topic, design considerations, and a case study of how one such game was developed. I’m assuming you are familiar with the Creative* Interactive Gesture camera and the Intel® Perceptual Computing SDK. Although the code samples are given in C++, the concepts explained are applicable to Unity* and C# developers as well. It is also advantageous if you have a working knowledge of extracting and using the depth data generated by the Gesture camera.

2. Why Is This Important

It is often said by old-school developers that there are only about six games in the world, endlessly recreated with new graphics and sound, twists and turns in the story, and of course, improvements in the technology. When you start to break down any game into its component parts, you start to suspect this cynical view is frighteningly accurate. The birth and evolution of the platform genre was in no small way influenced by the fact the player had a joystick with only four compass directions and a fire button.

Assuming then that the type of controller used influences the type of games created, imagine what would happen if you were given a completely new kind of controller, one that intuitively knew what you were doing, as you were doing it. Some amazing new games to play would be created that would open the doors to incredible new gaming experiences.

3. The Question of Determinism

One of the biggest challenges facing hands-free gaming, and indeed Perceptual Computing in general, is the ability for your application or game to determine what the user intends to do, 100% of the time. A keyboard where the A key failed to respond 1% of the time, or a mouse that selected the right button at random every fifteen minutes, would be instantly dismissed as faulty and replaced. Thanks to our human interface devices, we now expect 100% compliance between what we intend and what happens on screen.

Perceptual Computing can provide no less. Given the almost infinite combination of input pouring in through the data streams, we developers have our work cut out! A mouse has a handful of dedicated input signals, controllers have a few times that, and keyboards more so. A Gesture Camera would feed in over 25,000 times more data than any traditional peripheral controller, and there is no simple diagram to tell you what any of it actually does.

As tantalizing as it is to create an input system that can scan the player and extract all manner of signals from them, the question is can such a signal be detected 100% of the time? If it’s 99%, you must throw it out or be prepared for a lot of angry users!

4. Overview of the Body Mass Tracker technique

One technique that can be heralded as 100% deterministic is the Body Mass Tracker technique, which was featured in one of my previous articles at http://software.intel.com/en-us/articles/perceptual-computing-depth-data-techniques.

By using the depth value as a weight against cumulatively adding together the coordinates of each depth pixel, you can arrive at a single coordinate that indicates generally at which side of the camera the user is located. That is, when the user leans to the left, your application can detect this and provide a suitable coordinate to track them. When they lean to the right, the application will continue to follow them. When the user leans forward, this too is tracked. Given that the sample taken is absolute, individual details like hand movements, background objects, and other distractions are absorbed into a “whole view average.”

The code is divided into two simple steps. The first averages the coordinates of all qualifying depth pixels to produce a single coordinate, and the second draws a dot onto the camera picture image render so we can see whether the technique works. When run, you will see the dot center itself around the activity of the depth data.

// find body mass center: average the coordinates of every "near" depth pixel
int iAvX = 0;
int iAvY = 0;
int iAvCount = 0;
for (int y=0; y<480; y++)
{
 for (int x=0; x<640; x++)
 {
  // map the 640x480 color pixel to its 320x240 depth pixel via the UV map
  int dx = g_biguvmap[(y*640+x)*2+0];
  int dy = g_biguvmap[(y*640+x)*2+1];
  pxcU16 depthvalue = ((pxcU16*)ddepth.planes[0])[dy*320+dx];
  // only accumulate pixels nearer than the threshold (screens out the background)
  if ( depthvalue<65535/5 )
  {
   iAvX = iAvX + x;
   iAvY = iAvY + y;
   iAvCount++;
  }
 }
}
if ( iAvCount>0 )   // avoid dividing by zero when no pixel passes the threshold
{
 iAvX = iAvX / iAvCount;
 iAvY = iAvY / iAvCount;
}

// draw body mass dot as a 17x17 white square at the averaged coordinate
for ( int drx=-8; drx<=8; drx++ )
 for ( int dry=-8; dry<=8; dry++ )
  ((pxcU32*)dcolor.planes[0])[(iAvY+dry)*640+(iAvX+drx)]=0xFFFFFFFF;

In Figure 4 below, notice the white dot has been rendered to represent the body mass coordinate. As the user leans right, the dot respects the general distribution by smoothly floating right, when he leans left, the dot smoothly floats to the left, all in real-time.


Figure 4: The white dot represents the average position of all relevant depth pixels

As you can see, the technique itself is relatively simple, but the critical point is that the location of this coordinate will be predictable under all adverse conditions that may face your game when in the field. People walking in the background, what you’re wearing, and any subtle factors being returned from the depth data stream are screened out. Through this real-time distillation process, what gets produced is pure gold, a single piece of input that is 100% deterministic.

5. The Importance of Calibration

There’s no way around it—your game or application has to have a calibration step. A traditional controller is hardwired for calibration and allows the user to avoid the mundane task of describing to the software which button means up, which one means down, and so on. Perceptual Computing calibration is not quite as radical as defining every function of input control for your game, but it’s healthy to assume this is the case.

This step is more common sense than complicated and can be broken down into a few simple reminders that will help your game help its players.

Camera Tilt– The Gesture camera ships with a vertical tilt mechanism that allows the operator to angle the camera to face up or down by a significant degree. Its lowest setting can even monitor the keyboard instead of the person sitting at the desk. It is vital that your game does not assume the user has the camera in the perfect tilt position. It may have been knocked out of alignment or recently installed. Alternatively, your user may be particularly tall or short, in which case they need to adjust the camera so they are completely in the frame.

Background Depth Noise– If you are familiar with the techniques of filtering the depth data stream coming from the Gesture camera, you will know the problems with background objects interfering with your game. This is especially true at exhibits and conventions where people will be watching over the shoulder of the main player. Your game must be able to block this background noise by specifying a depth level beyond which the depth objects are ignored. As the person will be playing in an unknown environment, this depth level must be adjustable during the calibration step, and ideally using a hands-free mechanism.
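A sketch of the idea follows, with an adjustable cut-off that the calibration step would tune; the variable name, units, and default value are assumptions, not from the SDK.

// Sketch: treat only depth pixels nearer than an adjustable cut-off as part
// of the player; everything beyond it (onlookers, walls) is ignored.
pxcU16 g_depthCutoff = 12000;   // raw depth units; tuned during calibration

bool IsPlayerPixel(pxcU16 depthValue)
{
    // Zero values indicate invalid pixels and are also rejected.
    return depthValue > 0 && depthValue < g_depthCutoff;
}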

For a true hands-free game, it’s best not to resort to traditional control methods to set up your game, as this defeats the purpose of a completely hands-free experience. It might seem paradoxical to use hands-free controls to calibrate misaligned hands-free controls, but on-screen prompts and visual hints direct from the depth/color camera should be enough to orient the user. Ideally the only hands-on activity should be tilting the camera when you play the game for the first time.

6. The UX and UI of Hands-Free Gaming

Perceptual Computing is redefining the application user experience, dropping traditional user interfaces in favor of completely new paradigms to bridge the gap between human and machine. Buttons, keys, and menus are all placeholder concepts, constructed to allow humans to communicate with computers.

When designing a new UI specific for hands-free gaming, you must begin by throwing away these placeholders and start with a blank canvas. It would be tempting to study the new sensor technologies and devise new concepts of control to exploit them, but we would then make the same mistakes as our predecessors.

You must begin by imagining a user experience that is as close to human conversation as possible, with no constraints imposed by technology. Each time you degrade the experience for technical reasons, you’ll find your solution degenerating into a control system reminiscent of traditional methods. For example, using your hand to control four compass directions might seem cool, but it’s just a novel transliteration of a joystick, which in itself was a crude method of communicating the desires of the human to the device. In the real world, you simply walk forward, or, in the case of a third person, speak instructions sufficiently detailed to achieve the objective.

As developers, we encounter technical constraints all the time, and it’s tempting to ease the UX design process by working within these constraints. My suggestion is that your designs begin with blue-sky thinking, and meet any technical constraints as challenges. As you know, the half-life of a problem correlates to the hiding powers of the solution, and under the harsh gaze of developers, no problem survives for very long.

So how do we translate this philosophy into practical steps and create great software? A good starting point is to imagine something your seated self can do and associate that with an intention in the software.

The Blue Sky Thought

Imagine having a conversation with an in-game character, buying provisions, and haggling with the store keeper. Imagine the store keeper taking note of which items you are looking at and starting his best pitch to gain a sale. Imagine pointing at an item on a shelf and saying “how much is that?” and the game unfolding into a natural conversation. We have never seen this in any computer game and yet it’s almost within reach, barring a few technical challenges.

It was in the spirit of this blue-sky process that I contemplated what it might be like to swim like a fish, no arms or legs, just fins, drag factors, nose direction, and a thrashing tail. Similar to the feeling a scuba diver has, fishy me could slice through the water, every twist of my limbs causing a subtle change in direction. This became the premise of my control system for a hands-free game you will learn about later in this article.

Player Orientation

Much like the training mode of a console game, your game must orient the player in how to play from the very first moment. With key, touch, and controller games, you can rightly assume the majority of your audience has a basic toolbox of knowledge for figuring out how to play. Compass directions, on-screen menus, and action buttons are all common instruments we use to navigate any game. Hands-free gaming throws most of that away, which means that in addition to creating a new paradigm for game control, we also need to explain and nurture the player through these new controls.

A great way to do this is to build it into the above calibration step, so that the act of setting up the Gesture camera and learning the player’s seated position is also the process of demonstrating how the game controls work.

Usability Testing

Unlike most usability tests, when testing a hands-free game, additional factors come into play that would not normally be an issue on controller-based games. For example, even though pressing the left-pad left would be universal no matter who is playing your game, turning your head left might not have the same clear-cut response. That is not to say you have breached the first rule of 100% determinism, but that the instructions you gave and the response of the human player may not tally up perfectly. Only by testing your game with a good cross section of users will you be able to determine whether your calibration and in-game instructions are easy to interpret and repeat without outside assistance.

The closest equivalent to traditional considerations is to realize that a human hand cannot press all four action buttons at once in a fast-paced action game, due to the fact you only have one thumb available and four buttons. Perhaps after many months of development, you managed such a feat and it remained in the game, but testing would soon chase out such a requirement. This applies more so to hands-free game testing, where the capabilities between humans may differ wildly and any gesture or action you ask them to perform should be as natural and comfortable as possible.

One example of this is a hands-free game that required holding your hand out to aim fireballs at your foe. A great game and lots of fun, but it was discovered when shown to conference attendees that after about 4 minutes their arm would be burning with the strain of playing. To get a sense of what this felt like, hold a bag of sugar at arm’s length for 4 minutes or so.

It is inevitable that we’ll see a fair number of hands-free games that push the limit of human capability and others that teasingly dance on the edge of it. Ultimately, the player wants to enjoy the game more than they want an upper body workout, so aim for ease of use and comfort and you’ll win many fans in the hands-free gaming space.

7. Creating a Hands-Free Game – A Walkthrough

Reading the theory is all well and good, but I find the most enlightening way to engage with the material is when I see it in action. What better way to establish the credibility of this article than to show you a finished game inspired by the lessons preached here.


Figure 5:Title screen from the game DODGE – a completely hands-free game experiment

The basic goal when writing DODGE was to investigate whether a completely hands-free game could be created that required no training and was truly hands-free: that is, an application that, once started from the OS, would require no keyboard, mouse, or touch and would be powered entirely by hands-free technology.

Having established the Body Mass Tracker as my input method of choice, I began writing a simple game based on the necessity to dodge various objects being thrown in your general direction. However, due to the lack of an artist, I had to resort to more primitive techniques for content generation and created a simple rolling terrain that incrementally added stalactites and stalagmites as the game progressed.

As it happened, the “cave” theme worked much better visually than any “objects being thrown at me” game I could have created in the same timeframe. So with my content in place, I proceeded to the Perceptual Computing stage.

Borrowed from previous Perceptual Computing prototypes, I created a small module that plugged into the Dark Basic Professional programming language which would feed me the body mass tracker coordinate to my game. Within the space of an hour I was now able to control my dodging behavior without touching the keyboard or mouse.

What I did not anticipate until it was coded and running was the nuance and subtlety you get from the BMT (Body Mass Tracker), in that every slight turn of the head, lean of the body, or twist in the shoulder would produce an ever so slight course correction by the pilot in the game. It was like having a thousand directions to describe north! It was this single realization that led me to conclude that Perceptual Gaming is not a replacement for peripheral controllers, but their successor. No controller in the world, no matter how clever, allows you to control game space using your whole body.

Imagine you are Superman for the day, and what it might feel like to fly—to twist and turn, and duck and roll at the speed of thought. As I played my little prototype, this was what I glimpsed, a vision of the future where gaming was as fast as thought.

Now to clarify, I certainly don’t expect you to accept these words at face value, as the revelation only came to me immediately after playing this experience for myself. What I ask is that if you find yourself with a few weekends to spare, try a few experiments in this area and see if you can bring the idea of “games as quick as thought” closer to reality.

At the time of writing, the game DODGE is still in development, but will be made available through various distribution points and announced through my twitter and blog feeds.

8. Tricks and Tips

Do’s

  • Test your game thoroughly with non-gamers. They are the best test subjects for a hands-free game as they will approach the challenge from a completely humanistic perspective.
  • Keep your input methods simple and intuitive so that game input is predictable and reliable.
  • Provide basic camera information through your game such as whether the camera is connected and providing the required depth data. No amount of calibration in the world will help if the user has not plugged the camera in.

Don’ts

  • Do not interpret data values coming from the camera as absolute values. Treat all data as relative to the initial calibration step so that each player in their own unique environment enjoys the same experience. If you developed and tested your game in a dark room with the subject very close to the camera, imagine your player in a bright room sitting far from the keyboard.
  • Do not assume your user knows how the calibration step is performed and supplement these early requests from the user with on-screen text, voice-over, or animation.
  • Never implement an input method that requires the user to have substantial training as this will frustrate your audience and even create opportunities for non-deterministic results.

9. Final Thoughts

You may have heard the expression “standing on the shoulders of giants,” and the idea that we use the hard-won knowledge of the past to act as a foundation for our future innovations. The console world had over 20 years of trial and error before they mastered the hands-free controller for their audience, and as developers we must learn from their successes and failures. Simply offering a hands-free option is not enough, as we must guard against creating a solution that becomes the object of novelty ten years from now. We must create what the gamer wants, not what the technology can do, and when we achieve that we’ll have made a lasting contribution to the evolution of hands-free gaming.

About The Author

When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Educational Sample Code for Windows* 8


Download


Download Sample Code [ZIP 379KB]

Abstract


This document is intended to provide developers with an accelerated path in the development of prototype applications. The product concepts and visual designs should also provide inspiration for ideas for similar applications.

The accompanying sample code focuses on demonstrating the following Windows* 8 features within an education application.

  • Semantic Zoom
  • App Bar
  • Swipe Select
  • Share

Semantic Zoom


Semantic Zoom is a touch-optimized technique used by Windows* Store apps for presenting and navigating large sets of related data or content within a single view (such as a photo album, app list, or address book).

Semantic Zoom uses two distinct modes of classification (or zoom levels) for organizing and presenting the content: one low-level (or zoomed in) mode that is typically used to display items in a flat, all-up structure and another, high-level (or zoomed out) mode that displays items in groups and enables a user to quickly navigate and browse through the content.

The Semantic Zoom interaction is performed with the pinch and stretch gestures (moving the fingers farther apart zooms in and moving them closer together zooms out), or by holding the Ctrl key down while scrolling the mouse scroll wheel, or by holding the Ctrl key down (with the Shift key, if no numeric keypad is available) and pressing the plus (+) or minus (-) key.

Creating a custom SemanticZoom control

To provide this zooming functionality, the SemanticZoom control uses two other controls: one to provide the zoomed-in view and one to provide the zoomed-out view. These controls can be any two controls that implement the IZoomableView interface.

Note: ListView is the only Windows Library for JavaScript* control that implements this interface.

WinJS.UI.IZoomableView interface: Supports Semantic Zoom functionality by exposing a control as either the zoomed-in or the zoomed-out view of the Semantic Zoom control.

To implement the IZoomableView interface, we need to implement the following methods:

beginZoom: Initiates Semantic Zoom on the custom control.

configureForZoom: Initializes the semantic zoom state for the custom control. This method takes the following four parameters:

  • isZoomedOut
    • Type: Variant
    • True if this is the zoomed-out view; otherwise false.
  • isCurrentView
    • Type: Variant
    • True if this is the current view; otherwise false.
  • triggerZoom
    • Type: Variant
    • The function that manages semantic zoom behavior. Triggers a zoom in or zoom out if the control is the visible control.
  • prefetchedPages
    • Type: Variant
    • The number of pages of content to pre-fetch for zooming.
    • This value is dependent on the size of the semantic zoom container. More content can be displayed based on the zoom factor and the size of the container.

endZoom: Terminates Semantic Zoom on the zoomed-in or zoomed-out child of the custom control. This method takes one parameter:

  • isCurrentView
    • Type: Variant
    • True if the control is the visible control; otherwise false.

getCurrentItem: Retrieves the current item of the zoomed-in or zoomed-out child of the custom control.

getPanAxis: Retrieves the panning axis of the zoomed-in or zoomed-out child of the custom control.

handlePointer: Manages pointer input for the custom control. This method takes one parameter:

  • pointerId
    • Type: Variant
    • The ID of the pointer

positionItem: Positions the specified item within the viewport of the child control when panning or zooming begins. This method takes two parameters:

  • item
    • Type: Variant
    • The object to position within the viewport of the child control.
    • item can be a number, a string, or an object with any number of properties.
  • position
    • Type: Variant
    • An object that contains the position data of the item relative to the child control.

position must be an object with four number properties: left, top, width, and height.
These values specify a rectangle that is typically the bounding box of the current item, though the details are up to the control. The units of the position must be in pixels, and the coordinates must be relative to the top-left of the control viewport (which should occupy the same area as the Semantic Zoom viewport), except when in RTL mode. In RTL mode, return coordinates relative to the top-right of the control viewport.
The rectangle is transformed from the coordinate system of one control to another.

setCurrentItem: Selects the item closest to the specified screen coordinates. This method takes two parameters:

  • x
    • Type: Variant
    • The x-coordinate in device-independent pixels (DIPs) relative to the upper-left corner of the SemanticZoom viewport.
  • y
    • Type: Variant
    • The y-coordinate in DIPs relative to the upper-left corner of the SemanticZoom viewport.

For this particular sample code we are not using ListView; instead, we are creating our own custom semantic zoom control, starting with implementing the IZoomableView interface.

We start by describing the scenario, i.e., exactly what is going to be implemented. For example: We are going to be creating a control that has two states for the zoomedIn and zoomedOut views. The control shows a historic timeline from year 1400 to 1550. In the zoomedOut view, the timeline is shown with data sparsely populated. When the user zooms in with the appropriate gesture, the timeline is expanded with more detail.

To do this we implement the ZoomableView class that implements the IZoomableView interface. The customSemanticZoom.js file looks like this:

// Implementing the IZoomableView interface
var ZoomableView = WinJS.Class.define(function (timeline) {
    // Constructor
    this._timeline = timeline;
}, {
    // Public methods
    getPanAxis: function () {
        return this._timeline._getPanAxis();
    },
    configureForZoom: function (isZoomedOut, isCurrentView, triggerZoom, prefetchedPages) {
        this._timeline._configureForZoom(isZoomedOut, isCurrentView, triggerZoom, prefetchedPages);
    },
    setCurrentItem: function (x, y) {
        this._timeline._setCurrentItem(x, y);
    },
    getCurrentItem: function () {
        return this._timeline._getCurrentItem();
    },
    beginZoom: function () {
        this._timeline._beginZoom();
    },
    positionItem: function (/*@override*/item, position) {
       return this._timeline._positionItem(item, position);
    },
    endZoom: function (isCurrentView) {
        this._timeline._endZoom(isCurrentView);
    },
    handlePointer: function (pointerId) {
        this._timeline._handlePointer(pointerId);
    }
});

In the code above we defined a class called ZoomableView that implements the IZoomableView interface.

Now we define the actual control under the namespace CustomControls. To the WinJS.Class.define method we pass the following parameters:

  • A constructor method that creates the zoomedIn and zoomedOut views for the semantic zoom control.
  • Instance members, a set of properties and methods made available on the type.
In the constructor method we create the element _viewport that will contain the semantic zoom controls. Inside _viewport we create another element, _canvas, which will display the zoomedIn or zoomedOut control elements inside it. Here we use a boolean variable called _initialView to indicate whether the current view is zoomedIn or zoomedOut. This value is set in the HTML where we add the semantic zoom control to the HTML body. For the zoomedIn view, the value of _initialView will be true, and for the zoomedOut view it will be false.

In this sample we are using baked-in values of images and their positioning for both zoomedIn and zoomedOut views. For the zoomedIn view we are using zoomedInPointsArray that contains the names of the images that need to be positioned at different points on the history timeline scale. itemHeightArray and itemPositionArray contain the heights and pixel positions of the images, respectively. Once the item is created, we add the click event handler to the item so that when users click on any point on the history timeline, it will navigate to the detail page. Once the item is created, we append it to _canvas.

We use the same approach to create the zoomedOut view as well. In the zoomedOut view for click event, we invoke the _triggerZoom() method in order to move into zoomedIn view.

Now we define the methods that we implemented in the IZoomableView interface. But before that, we define a property that returns the ZoomableView instance. Continuing in the customSemanticZoom.js file:

// Define custom control for semantic zoom
WinJS.Namespace.define("CustomControls", {
    Timeline: WinJS.Class.define(function (element, options) {
        this._element = element;
        if (options) {
            if (typeof options.initialView === "boolean") {
                this._initialView = options.initialView;
            }
        }
        this._viewport = document.createElement("div");
        this._viewport.className = "viewportStyle";
        this._element.appendChild(this._viewport);
        this._canvas = document.createElement("div");
        this._canvas.className = "canvasStyle";
        this._viewport.appendChild(this._canvas);
        var viewportWidth = this._element.offsetWidth;
        var viewportHeight = this._element.offsetHeight;
        var that = this;

        // If current view is initial view then create ZoomedIn view
        if (this._initialView) {
            this._element.className = "timeline-zoomed-in";
            var zoomedInPointsArray = ["timeline-1.png", "timeline-2.png", "timeline-3.png", "timeline-4.png",     "timeline-5.png"]
            var itemHeightArray = [537, 800, 537, 777, 833];
            var itemPositionArray = [470, 220, -110, -95, -200];

            // Create items for the zoomed in view
            for (var i = 0; i <= zoomedInPointsArray.length - 1; i++) {
                var item = document.createElement("div");
                item.className = "zoomedInItem";
                item.style.backgroundImage = "url(/images/timeline/" + zoomedInPointsArray[i] + ")";
                item.style.marginLeft = itemPositionArray[i] + "px";
                item.style.height = itemHeightArray[i] + "px";

                // Add click event handler to navigate from page
                item.addEventListener("click", function () {
                    WinJS.Navigation.navigate("/pages/detail/detail.html");
                }, false);
                this._canvas.appendChild(item);
            }

            // Create bottom timeline scale
            var timelineScale = document.createElement("div");
            timelineScale.className = "scaleStyle";
            this._canvas.appendChild(timelineScale);
    }

    // Create zoomedOut view
    else {
        this._element.className = "timeline-zoomed-out";
        var zoomedOutPointsArray = ["sezo-1.png", "sezo-2.png", "sezo-3.png", "sezo-4.png", "sezo-5.png"]
        var itemWidthArray = [280, 280, 280, 267, 226];

        // Create items for the zoomed out view
        for (var i = 0; i <= zoomedOutPointsArray.length - 1; i++) {
            var item = document.createElement("div");
            item.className = "zoomedOutItem";
            item.style.backgroundImage = "url(/images/timeline/" + zoomedOutPointsArray[i] + ")";
            item.style.width = itemWidthArray[i] + "px";

            // Add click event handler to trigger zoom
            item.addEventListener("click", function () {
                if (that._isZoomedOut) {
                    that._triggerZoom();
                }
           }, false);
           this._canvas.appendChild(item);
       }
    }
    this._element.winControl = this;
 },
 {

        // Public properties
        zoomableView: {
            get: function () {
                if (!this._zoomableView) {
                    this._zoomableView = new ZoomableView(this);
                }
            return this._zoomableView;
            }
        },

        // Private properties
        _getPanAxis: function () {
            return "horizontal";
        },

        _configureForZoom: function (isZoomedOut, isCurrentView, triggerZoom, prefetchedPages) {
            this._isZoomedOut = isZoomedOut;
            this._triggerZoom = triggerZoom;
        },

        _setCurrentItem: function (x, y) {
           // Here set the position and focus of the current element
       },

       _beginZoom: function () {
           // Hide the scrollbar and extend the content beyond the viewport
          var scrollOffset = -this._viewport.scrollLeft;
          this._viewport.style.overflowX = "visible";
          this._canvas.style.left = scrollOffset + "px";
          this._viewport.style.overflowY = "visible";
       },

       _getCurrentItem: function () {
              // Get the element with focus
              var focusedElement = document.activeElement;
              focusedElement = this._canvas.firstElementChild;

              // Get the corresponding item for the element
              var /*@override*/item = 1;
              // Get the position of the element with focus
              var pos = {
                     left: focusedElement.offsetLeft + parseInt(this._canvas.style.left, 10),
                     top: focusedElement.offsetTop,
                     width: focusedElement.offsetWidth,
                     height: focusedElement.offsetHeight
              };
           return WinJS.Promise.wrap({ item: item, position: pos });
      },

      _positionItem: function (/*@override*/item, position) {
            // Get the corresponding item for the element
            var year = Math.max(this._start, Math.min(this._end, item)),
            element = this._canvas.children[item];

            //Ensure the element ends up within the viewport
            var viewportWidth = this._viewport.offsetWidth,
            offset = Math.max(0, Math.min(viewportWidth - element.offsetWidth, position.left));

            var scrollPosition = element.offsetLeft - offset;

            // Ensure the scroll position is valid
            var adjustedScrollPosition = Math.max(0, Math.min(this._canvas.offsetWidth - viewportWidth, scrollPosition));

            // Since a zoom is in progress, adjust the div position
            this._canvas.style.left = -adjustedScrollPosition + "px";
            element.focus();

           // Return the adjustment that will be needed to align the item
           return WinJS.Promise.wrap({ x: adjustedScrollPosition - scrollPosition, y: 0 });
    },

    _endZoom: function (isCurrentView, setFocus) {
        // Crop the content again and re-enable the scrollbar
        var scrollOffset = parseInt(this._canvas.style.left, 10);
        this._viewport.style.overflowX = "auto";
        this._canvas.style.left = "0px";
        this._viewport.style.overflowY = "hidden";
        this._viewport.scrollLeft = -scrollOffset;
    },

    _handlePointer: function (pointerId) {
        // Let the viewport handle panning gestures
        this._viewport.msSetPointerCapture(pointerId);
     }
 })
});

On the HTML page we add the SemanticZoom control and set its zoomFactor and initiallyZoomedOut properties.

The zoomFactor property gets or sets a value that specifies how much scaling the cross-fade animation performs when the SemanticZoom transitions between views. The initiallyZoomedOut property gets or sets a value that indicates whether the control is zoomed out.

In our sample we set the zoomFactor to 0.5 and initiallyZoomedOut to false so that when the page loads, it is in the zoomedIn view.

Under semantic zoom control we add zoomedIn and zoomedOut views so that we can switch from one view to another. In our JavaScript code we define a namespace CustomControls that contains the Timeline control. Our zoomedIn and zoomedOut views will be of type Timeline control. The value of the initialView property of the Timeline control should be true for zoomedIn view and false for zoomedOut view.

Add the custom semantic zoom control to the timeline.html page:

<div id="sezoDiv"
    data-win-control="WinJS.UI.SemanticZoom"
    data-win-options="{ zoomFactor: 0.5, initiallyZoomedOut: false }">
    <div id="ZoomedInDiv"
        data-win-control="CustomControls.Timeline"
        data-win-options="{initialView: true }">
    </div>
    <div id="ZoomedOutDiv"
        data-win-control="CustomControls.Timeline"
        data-win-options="{initialView: false }">
    </div>
</div>

We want the timeline to show as a single straight line with images for events at certain points of time. The following CSS sets up the timeline. Add the following CSS style to the timeline.css file:

.timeline-fragment {
    height: 100%;
    width: 100%;
}
section {
    height: 100%;
    width: 100%;
}
span {
    margin-left: 70px;
}
.win-semanticzoom {
    height: 520px;
}
.timeline-zoomed-in
{
    color: WhiteSmoke;
    height: 100%;
    width: 100%;
}
.timeline-zoomed-out
{
    margin-left: 50px;
}
.viewportStyle {
    position: absolute;
    overflow-x: auto;
    overflow-y: hidden;
    height: 100%;
}
.canvasStyle {
    position: relative;
    overflow: hidden;
    height: 100%;
}
 
.zoomedInItem, .zoomedOutItem {
    width: 215px;
    height: 537px;
    position: relative;
    overflow: hidden;
    float: left;
    background-position: center;
    background-repeat: no-repeat;
}
.scaleStyle {
    background-image: url(/images/timeline/timeline-meter-bottom.png);
    background-repeat: no-repeat;
    position: absolute;
    bottom: 0px;
    width: 1366px;
    height: 109px;
}

App bar


The next feature we want to demonstrate is the app bar, which represents an application toolbar for displaying commands in Windows 8 apps.

In this sample we are using a custom app bar. This is how we add the custom app bar to detail.html page:

<!-- BEGINTEMPLATE: Template code for AppBar -->
<div id="customAppBar" data-win-control="WinJS.UI.AppBar" data-win-options="{layout:'custom',placement:'bottom'}">
    <div id="leftButtonsContainer">
        <div id="addNotes"></div>
        <div id="addImages"></div>
    </div>
    <div id="bookmark"></div>
</div>
<!-- ENDTEMPLATE -->

The layout property of the app bar control gets or sets the layout of the app bar contents, while the placement property specifies whether the app bar appears at the top or bottom of the main view. Under the app bar control we added three elements that are custom app bar buttons. Two buttons must be positioned on the left side of the app bar, and they are wrapped inside the leftButtonsContainer div.

In this sample, the app bar and app bar buttons have custom background images. We set the background images and button positions in the CSS for the app bar.

App bar styles in detail.css:

#customAppBar {
    background-image: url(/images/app-bar-2.png);
    height: 100px;
}
#leftButtonsContainer {
    width: 150px;
    float: left;
    margin-left: 30px;
}
#addNotes {
    height: 54px;
    width: 54px;
    background-image: url(/images/icon-add-note.png);
    margin-top: 10px;
    float: left;
}
#addImages {
    height: 54px;
    width: 54px;
    background-image: url(/images/icon-add-pic.png);
    float: right;
    margin-top: 10px;
}
#bookmark {
    height: 54px;
    width: 54px;
    background-image: url(/images/icon-bookmark.png);
    float: right;
    margin-right: 30px;
    margin-top: 12px;
}

Swipe Select


In this scenario, when an app bar button is clicked, a flyout (popup) appears. The flyout contains a ListView with four images in it. Users can select any of the four images by using swipe select, which is a feature implemented by ListView.

WinJS.UI.ListView displays data items in a customizable list or grid. The ListView control has a property called swipeBehavior. ListView.swipeBehavior gets or sets how the ListView reacts to the swipe gesture. The swipe gesture can select the swiped items or have no effect on the current selection.

In the following code we add a flyout and a ListView within it in the detail.html file:

<!--Add image flyout-->
<div id="addImageFlyout" class="addImageFlyout" data-win-control="WinJS.UI.Flyout">

    <!--Template for the listView within the flyout-->
    <div id="listViewItemTemplate" data-win-control="WinJS.Binding.Template">
        <div class="listViewItem">
            <img src="#" class="listViewItemTemplate-Image" data-win-bind="src: picture" />
        </div>
    </div>
    <!--End listView template-->

    <!--ListView -->
    <div id="listView"
    data-win-control="WinJS.UI.ListView"
    data-win-options="{
    itemDataSource : Data.itemList.dataSource,
    itemTemplate: select('#listViewItemTemplate'),
    selectionMode: 'single',
    tapBehavior: 'none',
    swipeBehavior: 'select',
    layout: { type: WinJS.UI.GridLayout }
    }">
    </div>
<!--ListView end-->

</div>
<!--Flyout end-->

WinJS.UI.Flyout displays a lightweight UI that either shows information or requires user interaction. Unlike a dialog, a Flyout can be light dismissed by clicking or tapping off of it.

To enable swipe select, set the swipeBehavior property of list view to “select” and the selectionMode property should not be “none.” Instead, it should either be “single” or “multi.”

Notice that we are setting the itemDataSource of the list view to Data.itemList.dataSource. The ListView.itemDataSource property gets or sets the data source that provides the ListView with items. To show the images in the list view, we need to create a binding list that contains the source of the images that we will display. The WinJS.Binding.List object represents a list of objects that can be accessed by an index or by a string key, and provides methods to search, sort, filter, and manipulate the data.

This is how we create the binding list for the list view in data.js file:

// Create an array of images that will appear in the listview inside addImage flyout
var myDataImages = new WinJS.Binding.List([
    { picture: "images/icon-add-pic.png" },
    { picture: "images/icon-add-pic.png" },
    { picture: "images/icon-add-pic.png" },
    { picture: "images/icon-add-pic.png" },
]);
 
// Create a namespace to make the data publicly
// accessible.
var publicMembers = {
    itemList: myDataImages
};

WinJS.Namespace.define("Data", publicMembers);

Now we add a click event handler for the “Add Image” button on the app bar, which brings up the flyout that contains the list view. Once the flyout is displayed, we call the forceLayout method on the listView control to make sure all the images are visible inside the ListView, because while the flyout is hidden, the ListView control inside it is hidden as well. The ListView.forceLayout method forces the ListView to update its layout. Use this function when making the ListView visible again after its style.display property was set to "none."

Add the event handler to the detail.js file:

var addImageButton = document.getElementById("addImages");
var listView = element.querySelector("#listView").winControl;

addImageButton.addEventListener("click", function (e) {

    // On "Add Image" button click show the flyout with image thumb nails in list view
    var addImageFlyout = document.getElementById("addImageFlyout");
    var anchor = document.getElementsByTagName("body");
    addImageFlyout.winControl.show(anchor[0], "", "left");

    // Set the position of the flyout
    addImageFlyout.style.bottom = "100px";
    addImageFlyout.style.left = "130px";
    listView.forceLayout();
});

Add the appropriate style to the flyout and listView controls in the detail.css file:

.addImageFlyout {
    background-image: url(/images/flyout-add-image.png);
    height: 492px;
    width: 488px;
    background-repeat:no-repeat;
}
/* Template for items in the ListView */
.listViewItem
{
    width: 150px;
    height: 150px;
}
.listViewItemTemplate-Image {
    width: 140px;
    height: 140px;
    margin: 5px;
}
/* CSS applied to the ListView */
#listView{
    width: 450px;
    height: 400px;
    border: solid 2px rgba(0, 0, 0, 0.13);
}

Share


To set up your application as a share source app, you first need to get the instance of the DataTransferManager class that’s been assigned to the current window.

Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView() returns the DataTransferManager object associated with the current window.

This class supports a datarequested event, which is fired when a user presses the Share charm. Your app needs to listen for this event to know when the user wants to share data from your app. To do this, add the event handler onShareRequested to the datarequested event.

In the onShareRequested handler we create an html fragment string shareHtml. Then we pass the string to the Windows.ApplicationModel.DataTransfer.HtmlFormatHelper.createHtmlFormat(shareHtml) method, which returns a string representing the formatted HTML. This method takes a string that represents HTML content and adds the necessary headers to ensure it is formatted correctly for share and clipboard operations.

In the next step we add the HTML content to the data package. To share images, we create a random access stream around the image URI by calling the Windows.Storage.Streams.RandomAccessStreamReference.createFromUri(uri) method. We also need to set the share email title.

When unloading the page, we must disable the share by setting the ondatarequested event handler to null.

In the share.js file:

/* Methods */
var Enable = function () {
    var dataTransferManager = Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView();
    dataTransferManager.ondatarequested = onShareRequested;
};

var Disable = function () {
    var dataTransferManager = Windows.ApplicationModel.DataTransfer.DataTransferManager.getForCurrentView();
    dataTransferManager.ondatarequested = null;
};
/* Private methods */
function onShareRequested(e) {
    var request = e.request;
    // Construct the html fragment that will be shared
    var description = "Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor         incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.";

    description = description.length > 250 ? description.substring(0, 200) + '...' : description;

    var shareHtml = '<div style="width:100%; overflow:hidden; display:table;">' +
        '<div style="padding:0 25px 0 0; width:35%; display:table-cell; vertical-align:top;">' +
        '<img src="ms-appx://' + "/images/icon-add-pic.png" + '" style="border:1px solid #ccc;"/></div>' +
        '<p>' + description + '</p>' +
        '</div>';
    // Format the html fragment
    var obj = Windows.ApplicationModel.DataTransfer.HtmlFormatHelper.createHtmlFormat(shareHtml);

    //Adds HTML content to the DataPackage.
    request.data.setHtmlFormat(obj);

    // If there are images in the html fragment, we need to create a random access stream around the specified uri
    var streamRef = Windows.Storage.Streams.RandomAccessStreamReference.createFromUri(new Windows.Foundation.Uri("ms-appx://" + "/images/icon-add-pic.png"));
    request.data.resourceMap["ms-appx://" + "/images/icon-add-pic.png"] = streamRef;

    // Set the email title
    request.data.properties.title = "The New World (Litware)"; // required
}

About Ratio

Ratio is a leading multi-screen agency that partners with global brands to create seamless experiences across all platforms. We deliver multi-screen apps that provide consistent and optimized user experiences across the web, mobile, tablet, Connected TV, and most recently the console ecosystem specifically using our CypressX product which allows media brands to launch differentiated apps quickly on the Xbox LIVE platform. Ratio’s specialized team combines product strategy with compelling design and deep technical expertise to deliver award-winning applications for our clients that include AT&T, Condé Nast, Intel, Meredith, Microsoft, NASDAQ and Time Warner. Founded in 2001, Ratio is privately held and headquartered in Seattle, WA. To learn more about Ratio, visit http://www.WeAreRatio.com or follow the company on Twitter @teamratio.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

How to write a 2 in 1 aware application


Dynamically adapting your UI to 2 in 1 configuration changes

By: Stevan Rogers and Jamel Tayeb

Downloads


How to write a 2 in 1 aware application [PDF 596KB]

Introduction


With the introduction of 2 in 1 devices, applications need to be able to toggle between “laptop mode” and “tablet mode” to provide the best possible user experience. Touch-optimized applications are much easier to use in “tablet mode” (without a mouse or keyboard), than applications originally written for use in “laptop mode” or “desktop mode” (with a mouse and keyboard). It is critical for applications to know when the device mode has changed and toggle between the two modes dynamically.

This paper describes a mechanism for detecting mode changes in a Windows* 8 or Windows* 8.1 Desktop application, and provides code examples from a sample application that was enhanced to provide this functionality.

Basic Concepts


Desktop Apps vs. Touch-Optimized Apps

Most people are familiar with Desktop mode applications; Windows* XP and Windows* 7 applications are examples. These types of apps commonly use a mouse and a keyboard to input data, and often have very small icons to click on, menus that contain many items, sub-menus, etc. These items are usually too small and close together to be selected effectively using a touch interface.

Touch-optimized applications are developed with the touch interface in mind from the start. The icons are normally larger, and the number of small items is kept to a minimum. These optimizations to the user interface make using touch-based devices much easier. With these UI elements correctly sized, you should extend this attention to the usability of the objects the application is handling; graphic objects representing these items should also be adapted dynamically.

The original MTI application

The MTI (MultiTouchInterface) sample application was originally written as part of the Intel® Energy Checker SDK (see Additional Resources) to demonstrate (among many other things) how the ambient light sensors can be used to change the application interface.

At its core, the MTI sample application allows the user to draw and manipulate a Bézier curve. The user simply defines, in order, the first anchor point, the first control point, the second anchor point, and finally the second and last control point.

Figure 1 shows an example of a Bézier curve. Note that the size and color of each graphic element are designed to allow quick recognition—even via a computer vision system if required—and easy manipulation using touch.

  • Anchor points are square and tan.
  • Control points are round and red.
  • The segment joining an anchor point to its control point is blue.
  • The Bézier curve is black.


Figure 1. Select control and anchor points to draw Bezier curve.

Figure 2, Figure 3, Figure 4, and Figure 5 show the key interactions the user can have with the Bézier curve. An extra touch to the screen allows redrawing a new curve.


Figure 2. A green vector shows the displacement of the control point.


Figure 3. A grey arc shows the rotation of the Bezier curve.


Figure 4. A green vector shows the change of the Bezier curve placement onscreen.


Figure 5. Scale the Bezier curve with two fingers.

Support for Ambient Light Sensors (ALS) was added to the MTI sample application. Once the level of light is determined, the display dynamically changes to make it easier for the user to see and use the application in varying light situations. Microsoft recommends increasing the size of UI objects and color contrast as illumination increases.

MTI changed the interface in numerous stages, according to the light level. In a bright light situation, the MTI application changes the display to “high contrast” mode, increasing the size of the anchor and control points and fading the colors progressively to black and white. In a lower light situation, the application displays a more colorful (less contrasted) interface, with smaller anchor and control points.

Indeed, anyone who has used a device with an LCD screen, even with backlight, knows it may be difficult to read the screen on a sunny day. Figures 6 and Figure 7 show the issue clearly.


Figure 6. Sample with low ALS setting in full sunlight (control points indicated on right).


Figure 7. Sample with full ALS setting in full sunlight.

In our case, we decided to re-use the size change mechanism that we implemented for the ALS support, using only the two extremes of the display changes for the UI objects’ size. We do this simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode, and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).

Modified MTI (aka: Bezier_MTI)

Using the two extremes of the display shown above, the original MTI source code was modified to toggle between the two contrast extremes based on a specific event: the switch between tablet mode and laptop mode of a 2 in 1 device. Switches in the hardware signal the device configuration change to the software (Figure 8).


Figure 8. Notification process. All elements must be present.

Upon starting the Bezier_MTI application, the initial status of the device is unknown (Figure 9). This is because the output of the API used to retrieve the configuration is valid only when a switch notification has been received. At any other time, the output of the API is undefined.

Note that only the first notification is required, since an application can memorize that it received a notification using a registry value. With this memorization mechanism, at the next start the application could detect its state using the API: if the application knows it has received a notification in the past on this platform, then it can use the GetSystemMetrics function to detect its initial state. Such a mechanism is not implemented in this sample.
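
A minimal sketch of such a memorization mechanism is shown below; it is not part of the sample. The registry key path and value name are hypothetical, the enumeration values mirror the sample’s DEVICE_CONFIGURATION_* naming (with DEVICE_CONFIGURATION_UNKNOWN assumed for the initial state), and error handling is kept to the essentials.

#include <windows.h>
#include <tchar.h>

// Mirrors the sample's device configuration constants (values assumed here).
enum {
    DEVICE_CONFIGURATION_UNKNOWN = 0,
    DEVICE_CONFIGURATION_TABLET,
    DEVICE_CONFIGURATION_NON_TABLET
};

// Hypothetical registry location used to remember that at least one
// ConvertibleSlateMode notification has been received on this platform.
#define APP_REG_KEY   _T("Software\\MyCompany\\Bezier_MTI")
#define APP_REG_VALUE _T("SlateModeNotified")

// Call when the WM_SETTINGCHANGE / "ConvertibleSlateMode" message arrives.
void RememberSlateNotification(void)
{
    HKEY hKey = NULL;
    DWORD one = 1;
    if (RegCreateKeyEx(HKEY_CURRENT_USER, APP_REG_KEY, 0, NULL, 0,
                       KEY_SET_VALUE, NULL, &hKey, NULL) == ERROR_SUCCESS) {
        RegSetValueEx(hKey, APP_REG_VALUE, 0, REG_DWORD,
                      (const BYTE *)&one, sizeof(one));
        RegCloseKey(hKey);
    }
}

// Call at startup: only if a notification was seen in the past can the
// GetSystemMetrics reading be trusted as the initial state.
int DetectInitialConfiguration(void)
{
    HKEY hKey = NULL;
    DWORD value = 0, size = sizeof(value), type = 0;
    BOOL notified = FALSE;

    if (RegOpenKeyEx(HKEY_CURRENT_USER, APP_REG_KEY, 0,
                     KEY_QUERY_VALUE, &hKey) == ERROR_SUCCESS) {
        if (RegQueryValueEx(hKey, APP_REG_VALUE, NULL, &type,
                            (LPBYTE)&value, &size) == ERROR_SUCCESS &&
            type == REG_DWORD && value == 1) {
            notified = TRUE;
        }
        RegCloseKey(hKey);
    }

    if (!notified) {
        return DEVICE_CONFIGURATION_UNKNOWN;
    }
    return (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0)
        ? DEVICE_CONFIGURATION_TABLET
        : DEVICE_CONFIGURATION_NON_TABLET;
}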


Figure 9. State machine.

When the mode of the device is changed, Windows sends a WM_SETTINGCHANGE message to the top level window only, with “ConvertibleSlateMode” in the LPARAM parameter. Bezier_MTI detects the configuration change notification from the OS via this message.

If LPARAM points to a string equal to “ConvertibleSlateMode”, then the app should call GetSystemMetrics(SM_CONVERTIBLESLATEMODE). A “0” returned means it is in tablet mode. A “1” returned means it is in non-tablet mode (Figure 10).

	...
	
	//---------------------------------------------------------------------
	// Process system setting update.
	//---------------------------------------------------------------------
	case WM_SETTINGCHANGE:
	
    //-----------------------------------------------------------------
	   // Check slate status.
	   //-----------------------------------------------------------------
	   if(
	      ((TCHAR *)lparam != NULL) &&
	      (
	         _tcsnccmp(
	            (TCHAR *)lparam,
	            CONVERTIBLE_SLATE_MODE_STRING,
	            _tcslen(CONVERTIBLE_SLATE_MODE_STRING)
	         ) == 0
	       )
	   ) {
	
	      //-------------------------------------------------------------
	      // Note:
	      //    SM_CONVERTIBLESLATEMODE reflects the state of the 
	      // laptop or slate mode. When this system metric changes,
	      // the system sends a broadcast message via WM_SETTING...
	      // CHANGE with "ConvertibleSlateMode" in the LPARAM.
	      // Source: MSDN.
	      //-------------------------------------------------------------
	      ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
	      if(ret == 0) {
	         data._2_in_1_data.device_configuration = 
	            DEVICE_CONFIGURATION_TABLET
	         ;
	      } else {
	         data._2_in_1_data.device_configuration = 
	            DEVICE_CONFIGURATION_NON_TABLET
	         ;
	      }
	...

Figure 10. Code example for detecting device mode change.

As good practice, Bezier_MTI includes an override button to manually set the device mode. The button is displayed as a Question Mark (Figure 11) at application startup; then changes to a Phone icon (Figure 12) or a Desktop icon (Figure 13) depending on the device mode at the time. The user is able to touch the icon to manually override the detected display mode. The application display changes according to the mode selected/detected. Note that in this sample, the mode annunciator is conveniently used as a manual override button.


Figure 11. Device status unknown.

A phone icon is displayed in tablet mode.


Figure 12. Note the larger control points.

A desktop icon is displayed in non-tablet mode.


Figure 13. Note the smaller control points.

How do I notify my application of a device configuration change?

Most of the changes in this sample are graphics related. An adaptive UI should also change the nature and the number of the functions exposed to the user (this is not covered in this sample).

For the graphics, you should disassociate the graphics rendering code from the management code. Here, the drawing of the Bezier curve and other UI elements is separated from the geometry data computation.

In the graphics rendering code, you should avoid using static GDI objects. For example, the pens and brushes should be re-created each time a new drawing is performed, so the parameters can be adapted to the current status, or more generally to any sensor information. If no changes occur, there is no need to re-create the objects.

This way, as in the sample, the size of the UI elements adapts automatically to the device configuration readings. This not only impacts the color, but also the objects’ size. Note that the system display’s DPI (dots per inch) should be taken into account during the design of this feature. Indeed, small form factor devices have high DPI. This is not a new consideration, but it becomes more important as device display DPI increases. A minimal sketch of both ideas follows.
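
The sketch below is an illustration rather than the sample’s actual drawing code: the GDI pen and brush are created per draw, sized from the current configuration, and scaled by the display DPI. DrawControlPoint, MIN_POINT_RADIUS, and MAX_POINT_RADIUS are hypothetical names.

#include <windows.h>

// Minimal sketch: pick the UI object size from the current device
// configuration, scale it by the display DPI, and build the GDI objects
// fresh for this draw so any configuration change is picked up immediately.
#define MIN_POINT_RADIUS 12
#define MAX_POINT_RADIUS 28

void DrawControlPoint(HDC hdc, int x, int y, BOOL tablet_mode)
{
    // Scale for high-DPI panels: 96 DPI is the Win32 baseline.
    int dpi = GetDeviceCaps(hdc, LOGPIXELSX);
    int radius = tablet_mode ? MAX_POINT_RADIUS : MIN_POINT_RADIUS;
    radius = MulDiv(radius, dpi, 96);

    // Re-create pen and brush for this draw instead of caching static objects.
    HPEN   pen   = CreatePen(PS_SOLID, MulDiv(2, dpi, 96), RGB(200, 0, 0));
    HBRUSH brush = CreateSolidBrush(RGB(200, 0, 0));

    HGDIOBJ oldPen   = SelectObject(hdc, pen);
    HGDIOBJ oldBrush = SelectObject(hdc, brush);

    Ellipse(hdc, x - radius, y - radius, x + radius, y + radius);

    // Restore the DC and free the temporary objects.
    SelectObject(hdc, oldBrush);
    SelectObject(hdc, oldPen);
    DeleteObject(brush);
    DeleteObject(pen);
}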

In our case, we decided to re-use the size change mechanism that we implemented for the ALS support (Figure 14). We do this simply by setting the UI objects’ size to the minimum when the system is in non-tablet mode and to the maximum when it is in tablet mode (by convention, the unknown mode maps to the non-tablet mode).

	...
	ret = GetSystemMetrics(SM_CONVERTIBLESLATEMODE);
	   if(ret == 0) {
	      data._2_in_1_data.device_configuration = 
	      DEVICE_CONFIGURATION_TABLET
	      ;
	         //---------------------------------------------------------
	         shared_data.lux = MAX_LUX_VALUE;
	         shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
	
	   } else {
	         data._2_in_1_data.device_configuration = 
	            DEVICE_CONFIGURATION_NON_TABLET
	         ;
	         //---------------------------------------------------------
	      shared_data.lux = MIN_LUX_VALUE;
	      shared_data.light_coefficient = NORMALIZE_LUX(shared_data.lux);
	      }
	...

Figure 14. Code example for changing the UI.

The following code (Figure 15) shows how a set of macros makes this automatic. These macros are then used in the sample’s drawing functions.

	...
	   #define MTI_SAMPLE_ADAPT_TO_LIGHT(v) \
	    ((v) + ((int)(shared_data.light_coefficient * (double)(v))))
	
	   #ifdef __MTI_SAMPLE_LINEAR_COLOR_SCALE__
	   #define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT \
	      (1.0 - shared_data.light_coefficient)
	   #else // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
	      #define MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT \
	         (log10(MAX_LUX_VALUE - shared_data.lux))
	   #endif // __MTI_SAMPLE_LINEAR_COLOR_SCALE__
	
	   #define MTI_SAMPLE_ADAPT_RGB_TO_LIGHT(r, g, b) \
	   RGB( \
	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(r))), \
	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(g))), \
	    (int)(MTI_SAMPLE_LIGHT_COLOR_COEFFICIENT * ((double)(b))) \
	...

Figure 15. Macro example.

Conclusion


The Windows 8 and Windows 8.1 user interfaces allow developers to customize the experience for 2 in 1 devices. The device usage mode change can be detected, and the application interface changed dynamically, resulting in a better user experience.

About the Authors


Stevan Rogers has been with Intel for over 20 years. He specializes in systems configuration and lab management and develops marketing materials for mobile devices using Line Of Business applications.

Jamel Tayeb is the architect for the Intel® Energy Checker SDK. Jamel is a software engineer in Intel's Software and Services Group. He has held a variety of engineering, marketing, and PR roles over his 10 years at Intel. Jamel has worked with enterprise and telecommunications hardware and software companies on optimizing and porting applications to Intel platforms, including Itanium and Xeon processors. Most recently, Jamel has been involved with several energy-efficiency projects at Intel. Prior to joining Intel, Jamel was a professional journalist. Jamel earned a PhD in Computer Science from Université de Valenciennes, a post-graduate diploma in Artificial Intelligence from Université Paris 8, and a Professional Journalist Diploma from CFPJ (Centre de formation et de perfectionnement des journalistes – Paris Ecole du Louvre).

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others
Copyright© 2013 Intel Corporation. All rights reserved.


Power Explorer


This article, sample code, and whitepaper were produced by Leigh Davies (Intel), who examines how to optimize your graphics applications so that they self-adjust their workload on the CPU/GPU to prolong a system's battery life while maintaining an acceptable visual quality for the user.


Today, any review of a new processor, whether it’s used in a desktop computer, a laptop, a tablet, or a phone, will contain lots of information about how efficient it is and the new technologies that have been used to achieve this performance. Operating system developers spend large amounts of time optimizing to improve efficiency and extend battery life, but what can be done by someone who is designing an application and wants to ensure it runs as efficiently as possible? The aim of this sample is to provide insight into how features in a game can affect the power efficiency of the hardware it’s running on, including the importance of frame rate capping, the effect of bandwidth on power, and the cost of running asynchronous CPU work. The sample also demonstrates a way an application can adjust its workload to prolong a system’s battery life when it detects a change from AC power to battery; how aggressive the change is can be adjusted based on the currently active Windows power scheme.



Figure 1: Power Explorer with onscreen power information from Intel Power Gadget 2.7

The core of the sample was designed around the idea, used in Codemasters GRID2*, that an application can and should adapt its behavior based on whether it’s running on battery or AC power. We also wanted the amount of adaptation to reflect how the user had configured their system, so the decision was taken to tie it to the currently active Windows power scheme. As well as extending the system’s battery life when not running on AC power, this also allows the application to adapt to the fact that the hardware’s performance will change based on the power scheme. As the sample was created, it became clear that to tell the complete story of how and why a game needs to adapt based on the power source, we would need the ability to accurately display power information in real time and allow the user to experiment with a wide range of graphics options to see how they affect power, as not all optimizations are applicable to all titles. As a result, the sample can be split into three main areas.

  1. Windows Power APIs for measuring battery life, capturing system notifications on power changes, and querying the current Windows power scheme (a minimal sketch of reacting to such notifications follows this list).
  2. Integration of Intel Power Gadget 2.7 and the information this allows to be displayed on screen.
  3. A sandbox showing the effect the various graphics options have on power draw and the interaction between each other. The main effects that can be adjusted are:
  • VSync rate
  • Backbuffer format (a choice of 32-bit and 64-bit formats and MSAA)
  • HDR pipeline (tone mapping and bloom)
  • Resolution (using a simple upscale post process)
  • CPU workload and thread usage
  • Shader workload
  • Tessellation level
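
As a point of reference for item 1, here is a minimal C# sketch of reacting to a switch between AC and battery power from managed code. The sample itself works against the native Windows power APIs; the SetPowerSavingMode call below is a hypothetical hook standing in for whatever quality adjustments an application chooses to make.

using System;
using System.Windows.Forms;   // SystemInformation.PowerStatus, PowerLineStatus
using Microsoft.Win32;        // SystemEvents

static class PowerWatcher
{
    public static void Start()
    {
        // Raised when the machine suspends/resumes or the power status changes,
        // for example when the AC adapter is unplugged.
        SystemEvents.PowerModeChanged += OnPowerModeChanged;
        ApplyCurrentPowerSource();
    }

    private static void OnPowerModeChanged(object sender, PowerModeChangedEventArgs e)
    {
        if (e.Mode == PowerModes.StatusChange)
            ApplyCurrentPowerSource();
    }

    private static void ApplyCurrentPowerSource()
    {
        bool onBattery = SystemInformation.PowerStatus.PowerLineStatus ==
                         PowerLineStatus.Offline;
        SetPowerSavingMode(onBattery);   // hypothetical hook
    }

    private static void SetPowerSavingMode(bool enabled)
    {
        // Application-specific quality adjustments (frame cap, resolution, shader cost) go here.
    }
}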

More details can be found in the accompanying article below. As a final example of the effect that adjusting settings can have, running the sample on a 47 Watt 4950HQ system (see Table 1) gave the following results, with over a three-fold reduction in power draw between the maximum visual quality and power saving options.



Figure 2: Power comparison of different settings

The main things that significantly affected the power draw of the sample can be summarized as:

* Frame rate: limiting the maximum frame rate is the single most important optimization regarding power (a minimal frame-capping sketch follows this list).

* Bandwidth: back buffer formats and MSAA affect almost every rendering call; the less bandwidth used, the better.

* Limit CPU work that doesn’t provide tangible benefits on modern systems, leaving more of the power budget for the GPU. Avoid spin locks and unnecessary polling, and optimize time-consuming functions even on non-critical threads; just because the CPU is fast enough to do the work without stalling another part of the code doesn’t mean it’s an efficient use of the power budget.

* Balance shader cost against visual quality: excessive tessellation or highly complex pixel shaders that provide only minimal visual benefits can significantly impact power draw.
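
To illustrate the frame rate point, here is a small, framework-agnostic C# sketch that caps the frame rate by sleeping off the unused portion of each frame's time budget. It is only an illustration of the principle; the sample itself exposes a VSync rate option for the same purpose.

using System.Diagnostics;
using System.Threading;

class FrameLimiter
{
    private readonly double _targetFrameMs;
    private readonly Stopwatch _timer = Stopwatch.StartNew();

    public FrameLimiter(double targetFps)
    {
        _targetFrameMs = 1000.0 / targetFps;
    }

    // Call once per frame, after rendering, to burn off the unused time budget.
    public void WaitForNextFrame()
    {
        double remaining = _targetFrameMs - _timer.Elapsed.TotalMilliseconds;
        if (remaining > 1.0)
            Thread.Sleep((int)remaining);   // sleep rather than spin: spinning wastes power
        _timer.Restart();
    }
}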

Table 1: Hardware Spec used in testing

Intel® App Innovation Contest 2013 Developer Insights Interview Series - Dave Butler


Discussing MasterCard’s OpenAPI platform with business leader Dave Butler

Our second MasterCard Q&A tied to the Intel App Innovation Contest 2013

- Marc Saltzman

In case you missed our chat with MasterCard’s Garry Lyons, we spoke about creating engaging financial and retail apps for today’s mobile devices -- including inspirational advice for software developers taking part in the Intel® App Innovation Contest 2013 [http://aic2013.intel.com].

To round out the discussion, we also touched base with Dave Butler, VP and Senior Business Leader at MasterCard, who runs the company’s worldwide OpenAPI platform.

Marc Saltzman: Many people see MasterCard as a financial services provider, but your colleague Garry Lyons told me MasterCard has always been a technology company.

Dave Butler: We were recently at a developer-centric conference at the Moscone Center [in San Francisco] and they had a huge presentation wall – one side for Android developers and the other side for iOS developers.  We had a MasterCard stand right on the corner of the area set aside for the Android presentations. I lost count of the number of developers that came up and said “Why is MasterCard here? You’re a credit card company.” It’s a great opening line for me because I can explain that our company does not issue credit cards. What we do is to facilitate the use of cards in payment transactions. We handle the transactions taking place between the key players -- we run a payment network, a network that is second to none in terms of capacity, speed and availability. We see it as a tremendous technology asset that is at the core of our business.

MS: Speaking of technology, you wrote in a recent blog post “We’ve become spoiled by the continuous burst of innovative new apps that make our smartphones, tablets and computers even more useful and entertaining.” What would you be looking for from app developers creating compelling finance- or retail-focused mobile apps?

DB: As a developer, I think that the OpenAPI [https://developer.mastercard.com/portal/display/api/API] movement is tremendously exciting because there’s now a good chance that I can find an existing service close enough to what I actually need when I’m building a mobile application. In other words, instead of reinventing the wheel, I can find someone who already built what I need. Yes, I might have to pay to use the service, but it’ll be a lot cheaper than building it myself. It means that I can focus on the crucial task, which is building the client side of my mobile application. By using API services, I don’t have to divert so much of my attention away from the user-interface, and I can really go to town on the user-experience, which, frankly, is how people will judge my app anyway. MasterCard is giving people access to that payment network I just described and to other related services. By doing so we are encouraging and enabling people to think about brand new types of applications. Other companies are also offering OpenAPIs. So developers are no longer so constrained, forced to spend a fortune to build their own services. From MasterCard’s perspective, if we do a good job with our OpenAPI, developers can get cost effective access to our services, services that can be the core to innovative mobile apps that deliver tremendous value to people with great user experiences.

MS: How important is the trend of developing cross-platform apps using HTML5?

DB: I think it comes down to the user experience that you’re delivering. If you’re building something like a graphically-intensive game where you need an instant, high-speed response, not burdened by unnecessary latency, then you’re probably going to have to build a native app -- regardless of what platform you’re developing for. If you’re building a more general application, we’re rapidly approaching a time where most web browsers on smart devices are fast enough to create a good HTML5 experience. Devices are getting a lot more powerful and I think the web browser will be the go-to platform you develop applications for unless you have special requirements.

MS: What can today’s developers learn from older programs or technologies that could help improve their apps?

DB: Hmm, that’s an interesting one. The big thing is to realize that change is rapid in this business. Maximizing your productivity as an architect and a developer is always going to be key. If you are already familiar with a set of technologies that you can hook together and get a solution out the door, then when it comes to being successful, it’s probably a good idea to go with what you know. However, there comes a time when new technologies can deliver a quantum leap in capabilities. I think all of us in this business have to assume we’ll be learning a lot of new technologies during our careers. I am familiar with a good set of tools and technologies. But I also keep my eyes open for new developments. And then we also have to deal with disruptive events such as the appearance of the iPhone. But while the technologies might change, I haven’t seen the basic stuff I learnt in the beginning – testing, simplicity in code, etc. – going away. So stay nimble and realize that while some of what you are doing now will soon enough be considered old, you get to keep some of the knowledge and experience you pick up regardless.

MS: What is the single best piece of advice you can offer an app developer who is just starting out?

DB: Just keep learning. If I want to pick up a new technology, I try to code a side-project just to get familiar with it and then I can feed that into my professional life.


For additional development resources visit Intel® Developer Zone

http://software.intel.com/en-us/windows

Intel® App Innovation Contest 2013 Developer Insights Interview Series - Garry Lyons


MasterCard’s Garry Lyons on app innovation, mobile commerce and the value of data

A Q&A on developing finance and retail-related apps for mobile devices

- Marc Saltzman

The Intel® App Innovation Contest 2013 [http://aic2013.intel.com] is challenging developers to create innovative Windows* 8 apps for new tablets and All-in-One devices.

By providing more than $100,000 in cash prizes, the goal is to inspire developers to continue the evolution of computing with creative solutions to consumer and enterprise challenges.

We spoke to Garry Lyons, MasterCard's Chief Innovation Officer, who shared some information on the kinds of innovative apps powered by MasterCard, what developers can learn from them and how “data” is king.

Marc Saltzman: MasterCard seems to be transforming from a financial services provider to a technology company. What’s the catalyst behind this transformation?

Garry Lyons: We’ve always been a technology company. We facilitate and secure electronic payments between buyers and sellers in 150 currencies in 210 countries. Right now there are 2 billion MasterCard payment cards that can be used at 36 million merchant locations around the world. In terms of the technological transition, it has been more pronounced of late as technology has been evolving at a phenomenal rate and it’s enabling us to provide more advanced solutions to consumers, merchants, governments and financial institutions to create efficiencies or solve problems that traditionally would have been very difficult to do.

MS: Whether it’s for mobile devices like smartphones and tablets, or laptops and all-in-one PCs, how important are mobile apps to MasterCard?

GL: Mobile is probably the most prominent touch-point for consumers. At MasterCard, we believe in time, every device will be a commerce device. We see our role as facilitating commerce on these devices and continuing to make payments safe, simple and smart. Mobile in particular is enabling us to deliver more value to the customer. To give you an example of that, no consumer wakes up in the morning saying “I’m really looking forward to paying” – instead, they’re looking forward to doing something, such as having breakfast in the local coffee shop, getting a taxi into the office or tickets to a favorite team’s game. Instead of focusing on the payment, they’re focusing on the activity, and so we see mobile technology and innovation can help make that easier. We are focused on making lives easier and technology can play a big part in this.

MS: Speaking of easier and safer, there are a large number of people who are reluctant to shop on their phone, to conduct a financial transaction. Can MasterCard help alleviate these concerns through mobile apps and technologies?

GL: Absolutely, for starters, there are many, many ways to use your phone as a payment device. When we say making payments “safe, simple and smart,” it’s paramount that whether it’s a mobile PayPass transaction or a cloud-based transaction, as we transition from plastic to mobile and beyond, we retain the security aspect of it. By way of example, today, tapping your mobile phone to make a contactless PayPass transaction is analogous to tapping your card and has the same security characteristics as a card-based contactless transaction. We’re not stopping there - check out http://newsroom.mastercard.com/press-releases/mastercard-visa-and-american-express-propose-new-global-standard-to-make-online-and-mobile-shopping-simpler-and-safer/ to see how we’re making e-commerce and mobile shopping simpler and safer than ever before.

MS: When thinking about the retail category, both from the in-store consumer experience and from the corporate management standpoint for finance, what type of evolutionary and revolutionary opportunities do you think the tablet and 2 in 1 devices make possible?

GL: If you think about it from a merchant’s perspective, they create opportunity in terms of mobility. You don’t need to queue at a check-out -- you can do things like order ahead or in-aisle checkout. If the consumer has a device, you can know they’re in the store and personalize both the shopping experience and the checkout experience. Today, with a piece of plastic, when a consumer walks into the store, no one knows they’re there. So there’s no way it can enhance the consumer’s experience or deliver more value to the merchant. With mobile technology on a phone or tablet, consumers can choose to make a merchant aware they’re in store and can see new offers, recommendations, items their friends bought and so on. The retailer can know what a customer likes, how regular of a customer they are, what they’ve purchased previously, their loyalty program, and so much more. So technology allows you to create better experiences for the customer – again, before, during and after the transaction. It goes beyond just the payment. If you think about the consumer experience, we recently launched MasterPass [ http://newsroom.mastercard.com/digital-press-kits/masterpass-digital-wallet-now-every-device-is-a-shopping-device/], which is an omni-channel checkout platform. It allows you to use your same credentials whether you’re checking out in-store, on your mobile phone or tablet, or making a purchase in-app.

MS: Very interesting - and convenient.

GL: Yes. The technology creates a new experience. And another example of that is something we recently unveiled at World Retail Congress.  It’s called ShopThis! with MasterPass [link: http://newsroom.mastercard.com/news-briefs/in-content-one-touch-shopping-has-arrived/ ], which allows you to get physical goods from digital content. How many times have you been reading an online magazine or digital newspaper or watching a movie on a tablet and thought “I’d really like to buy that”? It could be from an advertisement or from the editorial. We’ve created a technology that allows the consumers to purchase the item there and then without leaving the digital content. It’s a capability we’ve created that can be embedded inside an app, a digital magazine or even on a website. For the consumer it gives them instant gratification, the ability to buy that product. This is just one of the many, many examples of how technology changes the game.

MS: This will require a MasterCard, no doubt. Do alternative payment services pose a threat or opportunity? The taxi I was in last week used Square on an iPhone. Does this complement MasterCard or help them?

GL: Actually our MasterPass solution supports all card brands, not just MasterCard. But back to your question, absolutely, these types of solutions are complementary. The reality is if the taxi driver didn’t have the capability of accepting cards, you wouldn’t be able to use your credit or debit card to pay for the taxi. The world is moving towards a digital future, where there are different experiences and checkout opportunities for the consumer and you’ll use the appropriate methodology at the appropriate time. Sometimes you may use your card, sometimes you’ll tap your phone or use the MasterPass solution. For example, if you went into a coffee shop today and you decided to use PayPass [ http://www.mastercard.us/paypass.html ], you’d tap your card and leave – it’s a fantastic experience. But fast-forward to Friday night and you want a beer and a hotdog at a baseball or football game and you don’t want to get out of your seat and miss the action. Using technology we developed called QkR™ [ http://newsroom.mastercard.com/tag/QkR/ ] (pronounced “Quicker”) built on the MasterPass APIs, you can order food from your phone and have it delivered directly to your seat -- using your secured credentials stored in the cloud [ http://newsroom.mastercard.com/videos/morning-brew-digital-shopping-and-new-ways-to-pay/ ]. We recognize that over time plastic will become less prevalent than mobile devices, but this is going to take some time, and consumers will ultimately make that decision.

MS: Globally speaking, what are some important consumer trends that you think developers should be aware of as they consider developing finance and retail mobile apps?

GL: Personalization and relevance are still very important to consumers and big data plays a large part in delivering this -- the ability to combine data sources allows you to provide greater relevance to consumers and merchants. I think consumers are getting more and more comfortable with technology and have huge expectations as to what technology can do for them. So you need to think about how to use data, with technology, to provide the optimum experience for consumers. I do think it’s using technology and data to create relevance. It could mean I get the appropriate offer at the appropriate time. It could mean getting the appropriate recommendation at the right time. For example, if someone knew I was flying into New York City tonight and they knew I was a big fan of Chinese food and they knew where I was staying – this data should be used to create the appropriate restaurant recommendations for me near my hotel. If it’s done poorly, such as an offer in the wrong city or for a food type that I don’t like or the consumer experience is bad, the consumer is likely to delete the app or not use the service again. Emerging markets are becoming even more important and smart phones are going to become ubiquitous everywhere. Q1 2013 was the first quarter where more smartphones were shipped than feature phones for the first time ever. So the apps you write will have an even bigger target market but depending on the app you are writing, you may need to consider the nuances of markets that would not historically have been a target for your app.

MS: Speaking of relevance, what are some of the more exciting apps MasterCard has available for digital devices? Are they all commerce related?

GL: Many of them are. We have launched a lot of commerce apps and there are ones we’re piloting but haven’t launched and I can’t speak to those yet. I talked about QkR in stadiums. And we’re live in Australia with QkR, where this mobile application is used by parents to pay for kids’ lunches at school. Now, while that might not sound like the most interesting thing in the world, today in these schools, parents give their kids cash, but there’s a chance the cash can be lost or taken from them or a chance the children might eat unhealthy food with that money. From an efficiency perspective, the school doesn’t know what foods need preparing until the kids show up with the cash that morning. Now, with the QkR application, parents can order food ahead of time for their children, for the whole week. That means they don’t need to use cash, there’s no chance of a child losing money, and it helps the school from an efficiency standpoint. So, every day, the school can see from our system what they need to make, say, 200 tuna sandwiches, 50 waters, 150 diet sodas and 100 bags of chips. They pack up brown bags and the kids are all set. QkR also lets parents pay for school books or other activities, not just lunch. It really makes it easy for parents to take care of their kids’ school needs.

MS: How important is the trend of developing cross-platform apps using HTML5?

GL: I think the jury is still out on that. We do both at the moment. We use HTML5 because it’s quick to develop for and runs on many platforms – PCs, tablets and a suite of phones -- but the [native] app today gives you a very tailored experience. For now, we’re going down both tracks.

MS: Can an app developer that creates a finance-based app gain trust by users – even with an unknown brand – or is it important to partner with a high-profile and trusted partner, like MasterCard?

GL: I’m going to say it’s an awful lot easier to gain credibility by partnering with a globally trusted brand. Developers looking to build an app should always look for developer friendly services that make writing your apps easier. There are many services out there that will lessen the effort it takes to write the app and shorten the time it takes to get into the wild. By way of example, we’ve recently launched a developer-focused platform called Simplified Commerce (http://www.simplify.com ) that can enable anyone to start accepting and processing payments in a matter of minutes. You get sample code and we take you through all the steps a developer needs to go through to start accepting payments on a website or from their mobile application. We support most of the popular languages like Java, PHP, C#, Python and Ruby, and have SDK support for iOS and Android. If a partner like MasterCard can provide you with all the tools you need, why would you try and build your own approach to accepting payments – and of course we deal with all of the PCI and security requirements.

MS: For an app developer starting out, what is the single best piece of advice you can offer?

GL: Dream big and don’t be afraid to try something that’s going to fail. Set out with the ambition that your app is going to be used by every single consumer on the planet. If you only get halfway there you’re going to be more successful than anyone else. Also, it’s good if your idea is solving a known problem, creating efficiency or making someone’s life easier.

MS: Is it possible for an app developer to build upon an existing concept or should they focus on creating something radically different? Facebook built upon other social networks like MySpace and made it so much more. Is that ok?

GL: Absolutely. There’s always going to be evolution and revolution. Innovation is about making something better, so if you spot an opportunity to improve upon something that already exists, go for it!


For additional development resources visit Intel® Developer Zone

http://software.intel.com/en-us/windows

Unity* 3D Touch GUI Widgets


By Lynn Thompson

Downloads

Download Unity* 3D Touch GUI Widgets [PDF 966KB]
Source Code: (coming soon!)

This article provides an example of using Unity* 3D assets to simulate Windows* graphical user interface (GUI) widgets. The TouchScript package available at no cost from the Unity Asset Store makes several touch gestures available. The example in this article uses the Press Gesture and Release Gesture.

The example begins with a preconfigured scene that has geometry for visual reference points and a capsule asset with a first-person shooter (FPS) controller and the scene’s main camera as a child for navigating the geometry. The scene is initially navigated in the default method, where an FPSController takes its input from the keyboard. To provide an alternate means of input for the FPS controller, I construct a GUI widget out of Unity 3D cubes. This widget will have four buttons: Up, Down, Left, and Right. I make the GUI widget visible on the screen by placing it in view of a second-scene camera with an altered normalized view port rectangle and a higher depth setting than the main camera. When these buttons receive a touch gesture, a respective action is sent to the FPS input controller to generate the desired effect.

I construct an additional four-button widget to control rotation of the capsule asset that holds the scene’s main camera. These buttons enable users to manipulate the “first person’s” rotation independently of the FPSController, even while moving the asset at the same time. This functionality uses the Press and Release Gestures.

When complete, running this simulation allows users to touch multiple buttons in varying patterns to navigate the scene. After this, I explore possibilities for how you can use and modify the GUI widgets developed in this example to emulate other, common gaming controllers. I also discuss the challenges I experienced using touch-based controllers in contrast to keyboard and handheld controllers.

Creating the Example

I begin this example by importing a preconfigured scene I exported from Autodesk 3ds Max*, adding a capsule and configuring it with an FPSController. By default, this controller takes its input from the keyboard. See Figure 1.



Figure 1. Unity* 3D Editor with a scene imported from Autodesk 3ds Max*

Adding Geometry

Next, I add geometry (cubes MoveForward, MoveBackward, MoveLeft, and MoveRight) to simulate a Windows GUI widget. I also add a light and camera to visualize the newly added cubes. To place this camera’s view in the bottom left of the runtime scene, I change both of the normalized view port Rect settings for elements W and H from 0 to 0.25. Also, for the camera to appear in the scene, its Depth setting must be greater than that of the main camera. The Depth setting for the main camera is −1, so the default Depth setting of 0 for the new camera will work. I make the light and cubes children of the camera by dragging these elements onto the camera in the Hierarchy panel. Next, I add the TouchScript > Layers > Camera Layer to the camera by clicking Add Component in the Inspector panel. The GUI widgets won’t function if this Camera layer step is not performed. See Figure 2.



Figure 2. Unity* 3D Editor showing a GUI widget

Adding a GUI Widget

I repeat this process to add a GUI widget to the bottom right of the screen for rotation control of the capsule with the FPSController and main camera. The scene looks like Figure 3, with both of the GUI widgets added to the scene.



Figure 3. Unity* 3D runtime with imported scene and GUI widgets

Connect the Widgets to the FPS Controller

The next step is to connect the new GUIWidget cubes to FPSController. To do so, I modify the default FPS input controller script for the capsule to use variables to instigate movement rather than input from the keyboard. See script FPSInputController.js in the accompanying Unity 3D scene.

Adding Gestures to the Widget

Next, I add TouchScript Press and Release Gestures to each move GUIWidget cube by clicking Add Component in the Inspector panel for each cube. The TouchScript menu for selecting a gesture became available when I downloaded and installed the TouchScript package.

After the TouchScript has been added, I add a custom script to the cube to receive the gesture and perform the desired action. Choosing to start with CubeMoveLeft, I add a new MoveLeft script to the cube by clicking Add Component in the Inspector panel. This script sends a Horizontal value of −1 to the FPSController global variable horizontal when the cube receives a Press Gesture. I also add code to this script to change the scale of the cube to visually confirm receipt of the gesture. See the C# script MoveLeft.cs in the accompanying Unity 3D scene.
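
The scene's scripts are not reproduced in this article, but the following C# sketch shows the general shape of such a MoveLeft component. The Pressed/Released event names on the TouchScript PressGesture and ReleaseGesture components, the TouchScript.Gestures namespace, and the FPSInputController component with a public horizontal field are assumptions based on the description above rather than code copied from the sample.

using UnityEngine;
using TouchScript.Gestures;   // assumed namespace for PressGesture/ReleaseGesture

public class MoveLeft : MonoBehaviour
{
    public FPSInputController controller;   // assumed component exposing a public 'horizontal' field
    private Vector3 _originalScale;

    private void OnEnable()
    {
        _originalScale = transform.localScale;
        // Assumed TouchScript events; some versions expose Gesture.StateChanged instead.
        GetComponent<PressGesture>().Pressed += OnPressed;
        GetComponent<ReleaseGesture>().Released += OnReleased;
    }

    private void OnPressed(object sender, System.EventArgs e)
    {
        controller.horizontal = -1f;                  // start moving left
        transform.localScale = _originalScale * 0.9f; // shrink slightly to confirm the touch
    }

    private void OnReleased(object sender, System.EventArgs e)
    {
        controller.horizontal = 0f;                   // stop moving
        transform.localScale = _originalScale;
    }
}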

Similarly, I add scripts to send −1 to the MoveBackward button and 1 to the MoveForward and MoveRight GUIWidget cubes. See the C# scripts Move[Backward,Forward,Right].cs in the accompanying Unity 3D scene.

Enabling Button Functionality

At this point, I can use the Move GUI widgets to navigate the scene but only individually. I can’t use the MoveForward and MoveLeft or MoveRight buttons in combination to move at a 45‑degree angle. To enable this functionality, I create an empty GameObject at the top of the hierarchy and use the Add Component to add script Touch Manager from the TouchScript menu. I also add the script Win7TouchInput from the TouchScript Input menu.

Now that the Move buttons work and I can navigate the scene by touching multiple buttons, I’m ready to implement the rotation functionality. These buttons don’t manipulate the FPSController directly; instead, they control the rotation of the capsule holding the FPSController and the scene’s main camera. Using the OnPress and OnRelease functionality as above, the script attached to the RotateLeft GUIWidget cube rotates the FPS capsule and child main camera to the left when the cube is touched. See the script RotateLeft.cs in the accompanying Unity 3D scene.

Similarly, I add scripts to send the appropriate rotation vector to the RotateUp, RotateRight, and RotateDown GUIWidget cubes. See the scripts Rotate[Backward,Forward,Right].cs in the accompanying Unity 3D scene.

The Completed Example

This completes “hooking up” the cubes being used as GUI widgets. I can now navigate the scene with touch-controlled movement and rotation multiple ways by touching and releasing multiple buttons.

I added a script to the main camera to create a video of the scene being run. This script writes a new .png file each frame. See the script ScreenCapture.cs in the accompanying Unity 3D scene.

I compiled the .png files that this script writes into a video called Unity3dTouch2.wmv using Autodesk 3ds Max and Windows Live* Movie Maker. I removed this script from the main camera upon completion of the video because it noticeably degrades the performance of the scene when active.

Video 1: Touch Script Multi Gesture Example

Common Game Controllers

Common game controllers include first person, third person, driving, flying, and overhead. Let’s look at each.

First Person

When comparing the GUI widgets implemented in the example above to the stock Unity 3D first-person controller, one of the most noticeable differences is that the GUI widget example can leave the camera rotation in an odd configuration. When you use two buttons for capsule rotation, it’s not immediately obvious how to get the rotation back to the original state, where the camera is in alignment with the scene horizon.

The stock Unity 3D first-person controller uses a script called MouseLook to perform the functionality that the Rotate[Left,Right,Up,Down] buttons provide. The MouseLook script uses localEulerAngles to rotate the camera; it offers a better means of rotating the camera view than the capsule rotation I used in the example. To take advantage of this better means of rotation, you can implement it in a manner similar to the FPSInputController: adding public variables mouseX and mouseY to the MouseLook script. You can then use these variables to replace the Input.GetAxis(“Mouse X”) and Input.GetAxis(“Mouse Y”) functions in the script. When these variables are hooked up to the rotate buttons and incremented and decremented, respectively, the scene’s main camera will rotate in a more useful manner.
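
A minimal sketch of that change, assuming a C# adaptation of the stock MouseLook behavior; the mouseX and mouseY fields are the additions described above, and the rotate-button scripts would increment or decrement them instead of the mouse driving the rotation.

using UnityEngine;

public class TouchMouseLook : MonoBehaviour
{
    // Set by the Rotate[Left,Right,Up,Down] button scripts instead of the mouse.
    public float mouseX;
    public float mouseY;

    public float sensitivityX = 15f;
    public float sensitivityY = 15f;
    private float _rotationY;

    private void Update()
    {
        // Replaces Input.GetAxis("Mouse X") and Input.GetAxis("Mouse Y").
        float rotationX = transform.localEulerAngles.y + mouseX * sensitivityX;
        _rotationY = Mathf.Clamp(_rotationY + mouseY * sensitivityY, -60f, 60f);
        transform.localEulerAngles = new Vector3(-_rotationY, rotationX, 0f);
    }
}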

Third Person

The stock Unity 3D third-person controller can be adapted to touch in a way similar to the first-person controller. Implement Move[Left,Right,Up,Down] in the ThirdPersonController.js script after hooking it up to the touch buttons with a new script, as before. The stock Unity 3D third-person controller automatically calculates the main camera rotation and position, leaving the second GUI widget created in the example available for alternate use. One possibility is to use the top and bottom buttons to increase and decrease variable jumpHeight, respectively, and use the left and right buttons to increase and decrease variable runSpeed, respectively. Many variables are available for similar adjustment in the ThirdPersonController.js script.

Driving

In the controllers examined so far, the Move[Left,Right,Forward,Reverse] scripts stop the motion of the character when an OnRelease event is detected. For a driving-type game, the scripts would do more than send a 1 or −1 to the first- or third-person controller. The Forward and Reverse scripts would have a range of values sent to emulate throttling and braking. The first 80% of the value range may be covered rapidly when the button is held down, for rapid acceleration; the last 20% of the values are sent slowly to the appropriate vector, so maximum speed is attained only gradually on a straight road while the Forward button is continually held down. The left and right buttons would perform similarly, possibly controlling the rotation of a Unity 3D asset that uses a wheel collider. In this type of scene, the GUI widget not used for steering can be used for control over parameters such as camera distance from the vehicle, throttle and braking sensitivity, and tire friction.
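
One hypothetical way to shape that throttle curve in C#: ramp quickly through the first 80 percent of the range while the Forward button is held, then approach full throttle slowly. The buttonHeld flag would be set by the same press and release handlers used for the other widgets.

using UnityEngine;

public class ThrottleButton : MonoBehaviour
{
    public float throttle;    // 0..1, read by the vehicle controller each frame
    public bool buttonHeld;   // set by the touch press/release handlers

    private void Update()
    {
        if (buttonHeld)
        {
            // Reach 80% of full throttle quickly, then creep toward 100%.
            float rate = throttle < 0.8f ? 2.0f : 0.1f;   // throttle units per second
            throttle = Mathf.Min(1f, throttle + rate * Time.deltaTime);
        }
        else
        {
            // Button released: ease back toward zero to emulate lifting off the accelerator.
            throttle = Mathf.MoveTowards(throttle, 0f, 1.5f * Time.deltaTime);
        }
    }
}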

Flying

To use the GUI widget interface developed in the example for a flying-type game, you would use the Move[Left,Right,Forward,Reverse] buttons similar to a joystick or flight stick. The left and right buttons would adjust roll, and the up and down buttons would control pitch. The Rotate[Left,Right] buttons in the other GUI widget can be used to increase and decrease yaw and camera distance from the aircraft.

Overhead View

In this type of scene, the main camera orbits the scene from overhead, likely moving around the perimeter of the scene while “looking” at the center of the scene. In a script attached to a Unity 3D scene’s main camera, you could define several Vector3s at points along the perimeter of the scene. Using the Vector3.Lerp function, you can control the fraction parameter with the MoveLeft and MoveRight GUI widget buttons to move the camera between two of the perimeter points. The script can detect when a perimeter point has been reached and begin “Lerp’ing” between the next two Vector3 points. The MoveForward and MoveReverse buttons can be used to adjust the vertical component of the Vector3 points to move the orbiting camera closer to or farther away from the scene. You could employ the other GUI widget being used for Rotate[Left,Right,Up,Down] in the example for a wide variety of things, such as time-of-day control or season-of-year control.
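
A rough C# sketch of that camera script, assuming the perimeter points and button-driven input described above; the moveInput field is a placeholder for whatever value the MoveLeft/MoveRight button scripts set.

using UnityEngine;

public class OrbitCamera : MonoBehaviour
{
    public Vector3[] perimeterPoints;   // points along the scene perimeter, set in the Inspector
    public Transform lookTarget;        // center of the scene
    public float moveInput;             // -1, 0, or 1, driven by the MoveLeft/MoveRight buttons

    private int _segment;               // index of the current perimeter segment
    private float _fraction;            // Lerp parameter between the segment's two points

    private void Update()
    {
        _fraction += moveInput * 0.5f * Time.deltaTime;

        // When a perimeter point is reached, begin Lerp'ing along the next segment.
        if (_fraction > 1f) { _fraction = 0f; _segment = (_segment + 1) % perimeterPoints.Length; }
        if (_fraction < 0f) { _fraction = 1f; _segment = (_segment - 1 + perimeterPoints.Length) % perimeterPoints.Length; }

        Vector3 from = perimeterPoints[_segment];
        Vector3 to = perimeterPoints[(_segment + 1) % perimeterPoints.Length];
        transform.position = Vector3.Lerp(from, to, _fraction);
        transform.LookAt(lookTarget);
    }
}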

Issues with Touch Control

The most readily observed issue in using touch control in the example above is that it blocks the view of the scene in the lower left and lower right corners. You may be able to partially remedy this problem by getting rid of the cameras that view the GUI widget buttons and making the buttons children of the scene’s main camera. The buttons would still be in the scene but would not be blocking out an entire rectangle of the scene.

You could further minimize button visibility by making them larger for more intuitive contact, and then making them disappear when touched and reappear when released. You can achieve this disappearing and reappearing by manipulating the asset’s MeshRenderer in the onPress and onRelease functions as follows:

// In the onPress handler:
GetComponent<MeshRenderer>().enabled = false;
// ...
// In the onRelease handler:
GetComponent<MeshRenderer>().enabled = true;

Another challenge of using touch is ergonomics. When using touch, users can’t rest their wrists on a keyboard and may not be able to rest their elbows on a desk. When developing GUI widgets for touch, take care to place buttons in the best position possible and to use the most efficient gesture possible.

Conclusion

The TouchScript package functions well when implementing the Press and Release Gestures. The resulting Unity 3D scene performed as desired when developed and run on Windows 8, even though the TouchScript input was defined for Windows 7.

The more common gaming interfaces can be emulated with touch. Because you can implement many combinations of touch gestures, many options are available when implementing and expanding these emulations. Keeping ergonomics in mind while implementing these GUI widget interfaces will lead to a better user experience.

About the author

Lynn Thompson is an IT professional with more than 20 years of experience in business and industrial computing environments. His earliest experience is using CAD to modify and create control system drawings during a control system upgrade at a power utility. During this time, Lynn received his B.S. degree in Electrical Engineering from the University of Nebraska, Lincoln. He went on to work as a systems administrator at an IT integrator during the dot com boom. This work focused primarily on operating system, database, and application administration on a wide variety of platforms. After the dot com bust, he worked on a range of projects as an IT consultant for companies in the garment, oil and gas, and defense industries. Now, Lynn has come full circle and works as an engineer at a power utility. Lynn has since earned a Masters of Engineering degree with a concentration in Engineering Management, also from the University of Nebraska, Lincoln.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Developing Windows* 8 Desktop Touch Apps with Windows* Presentation Foundation


By Bruno Sonnino

Downloads


Developing Windows* 8 Desktop Touch Apps with Windows* Presentation Foundation [PDF 733KB]

The launch of the Windows 8 operating system made touch a first-class citizen, and device manufacturers started to introduce new devices with touch-enabled displays. Such devices are now becoming more common and less expensive. In addition, manufacturers have introduced a new category of devices, 2 in 1 Ultrabook™ devices, which are lightweight and have enough power to be used as a traditional notebook with a keyboard and mouse or as a tablet using touch or a stylus.

These machines open new opportunities for developers. You can enable your apps for touch and make them easier to use and more friendly. When you create Windows Store apps for Windows 8, this functionality comes free. But what about desktop apps, the bread and butter of non-Web developers? You haven’t been forgotten.

In fact, desktop apps are already touch enabled, and Windows Presentation Foundation (WPF) has had built-in support for touch since version 4.0, as you’ll see in this article.

Design for Touch Apps


Microsoft has categorized touch for desktop apps according to three scenarios: good, better, and best.

Good Apps

With Windows 8, every desktop app has built-in touch support. All touch input is translated into mouse clicks, and you don’t need to change your app for it. After you have created a desktop app, it will work with touch. Users can click buttons, select list items or text boxes with a finger or stylus, and can even use a virtual keyboard to input text if no physical keyboard is available. You can see this behavior with File Manager, the Calculator app, Microsoft Notepad, or any of your desktop apps.

Better Apps

The built-in touch behavior in Windows 8 is good and doesn’t need much effort to develop, but it’s not enough. You can go a step further by adding gesture support for your app. Gestures are one- or two-finger actions that perform some predefined action—for example, tap to select, drag to move, flick to select the next or previous item, or pinch to zoom and rotate. See Figure 1.


Figure 1. Common gestures

The operating system translates these gestures into the WM_GESTURE message. You can develop a program to handle this message and process the gestures, which will give your apps a bonus because you can support actions exclusive to touch-enabled devices.

Best Apps

At the pinnacle of Microsoft’s rating scheme, you can develop the best app for touch by designing it to support full touch functionality. Now, you may ask, “Why should I design for touch? Don’t my apps work well enough for touch?” The answer, most of the time, is no.

Apps designed for touch are different from conventional apps in several ways:

  • The finger isn’t a mouse. It does not have the precision a mouse has and so the UI requires some redesign. Buttons, check boxes, and list items should be large enough that users can touch inside them with minimal error.
  • Touch apps may not have a keyboard available. Although users can use a virtual keyboard, it’s not the same as the real thing. Rethinking the user interface (UI) to minimize keyboard input for touch apps can make the apps easier to use.
  • Many contacts can occur at the same time. With a mouse, the program has a single input point, but with touch apps, there may be more than one input. Depending on the device, the program could accept 40 or 50 simultaneous inputs (imagine a touch table with five or six players).
  • Users can run the app in different orientations. Although traditional apps run in landscape, this is not true with touch apps. Users can rotate devices, and in some cases, there may be more than one user, such as one on either side of the device.
  • Users don’t have easy access to the whole device area. If a user is holding a tablet in his or her hands, it may be difficult to access the center of the device, because the user will have to hold it with one hand while touching it with the other one.

A “best” touch app must handle all of these issues and not abandon traditional data-entry methods with mouse and keyboard, or the app won’t work on devices that don’t have touch.

Touch Support in WPF


With WPF, you can add full touch support for your apps. You can add gestures or even full touch support with manipulations and inertia.

Adding Gestures to Your App

One way to add gestures to your apps is to process the WM_GESTURE message. The MTGestures sample in the Windows* 7 software development kit (SDK) shows how to do it. Just install the Windows 7 SDK and go to the samples directory (for the link, see the “For More Information” section at the end). Listing 1 shows the code.

Listing 1. Message processing in the MTGesture SDK sample

[PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
protected override void WndProc(ref Message m)
{
    bool handled;
    handled = false;

    switch (m.Msg)
    {
        case WM_GESTURENOTIFY:
            {
                // This is the right place to define the list of gestures
                // that this application will support. By populating 
                // GESTURECONFIG structure and calling SetGestureConfig 
                // function. We can choose gestures that we want to 
                // handle in our application. In this app we decide to 
                // handle all gestures.
                GESTURECONFIG gc = new GESTURECONFIG();
                gc.dwID = 0;                // gesture ID
                gc.dwWant = GC_ALLGESTURES; // settings related to gesture
                                            // ID that are to be turned on
                gc.dwBlock = 0; // settings related to gesture ID that are
                                // to be turned off (blocked)

                // We must p/invoke into user32 [winuser.h]
                bool bResult = SetGestureConfig(
                    Handle, // window for which configuration is specified
                    0,      // reserved, must be 0
                    1,      // count of GESTURECONFIG structures
                    ref gc, // array of GESTURECONFIG structures, dwIDs 
                            // will be processed in the order specified 
                            // and repeated occurrences will overwrite 
                            // previous ones
                    _gestureConfigSize // sizeof(GESTURECONFIG)
                );

                if (!bResult)
                {
                   throw new Exception("Error in execution of SetGestureConfig");
                }
            }
            handled = true;
            break;

        case WM_GESTURE:
            // The gesture processing code is implemented in 
            // the DecodeGesture method
            handled = DecodeGesture(ref m);
            break;

        default:
            handled = false;
            break;
    }

    // Filter message back up to parents.
    base.WndProc(ref m);

    if (handled)
    {
        // Acknowledge event if handled.
        try
        {
            m.Result = new System.IntPtr(1);
        }
        catch (Exception excep)
        {
            Debug.Print("Could not allocate result ptr");
            Debug.Print(excep.ToString()); 
        }
    }
}

You must override the window procedure, configure what kind of gestures you want when you receive the WM_GESTURENOTIFY message, and process the WM_GESTURE message.

As you can see, adding gestures to a C# app isn’t a simple task. Fortunately, there are better ways to do it in WPF. WPF has support for the stylus and raises the StylusSystemGesture event when the system detects a touch gesture. Let’s create a photo album that shows all photos in the Pictures folder and allows us to move between images by flicking to the right or left.

Create a new WPF app and add to the window a grid with three columns, two buttons, and an Image control. Listing 2 shows the code.

Listing 2. XAML markup for the new WPF app

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="40" />
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="40" />
    </Grid.ColumnDefinitions>
    <Button Grid.Column="0" Width="30" Height="30" Content="<" />
    <Button Grid.Column="2" Width="30" Height="30" Content=">" />
    <Image x:Name="MainImage" Grid.Column="1" />
</Grid>

Now, create a field named _filesList and another named _currentFile. See Listing 3.

Listing 3. Creating the _filesList and _currentFile fields

private List<string> _filesList;
private int _currentFile;

In the constructor of the main window, initialize _filesList with the list of files in the My Pictures folder. See Listing 4.

Listing 4. Main window constructor

public MainWindow()
{
    InitializeComponent();
    _filesList = Directory.GetFiles(Environment.GetFolderPath(
        Environment.SpecialFolder.MyPictures)).ToList();
    _currentFile = 0;
    UpdateImage();
}

UpdateImage updates the image with the current image, as shown in Listing 5.

Listing 5. Updating the image

private void UpdateImage()
{
    MainImage.Source = new BitmapImage(new Uri(_filesList[_currentFile]));
}

Then, you must create two functions to show the next and previous images. Listing 6 shows the code.

Listing 6. Functions to show the next and previous images

private void NextFile()
{
    _currentFile = _currentFile + 1 == _filesList.Count ? 0 : _currentFile + 1;
    UpdateImage();
}

private void PreviousFile()
{
    _currentFile = _currentFile == 0 ? _filesList.Count-1 : _currentFile - 1;
    UpdateImage();
}

The next step is to create the handlers for the Click event for the two buttons that call these functions.

In MainWindow.xaml, type the code in Listing 7.

Listing 7. Declaring the Click event handlers in MainWindow.xaml

<Button Grid.Column="0" Width="30" Height="30" Content="&lt;" Click="PrevClick"/>
<Button Grid.Column="2" Width="30" Height="30" Content="&gt;" Click="NextClick"/>

In MainWindow.xaml.cs, type the code in Listing 8.

Listing 8. Creating the Click event handlers in MainWindow.xaml.cs

private void PrevClick(object sender, RoutedEventArgs e)
{
    PreviousFile();
}

private void NextClick(object sender, RoutedEventArgs e)
{
    NextFile();
}

When you run the program, you will see that it shows the My Pictures images. Clicking the buttons allows you to cycle through the images. Now, you must add gesture support, which is simple. Just add the handler for the StylusSystemGesture event in the grid:

Listing 9. Declaring the StylusSystemGesture event handler in MainWindow.xaml

<Grid Background="Transparent" StylusSystemGesture="GridGesture" />

Note that I have added a background to the grid. If you don’t do that, the grid won’t receive the stylus events. The code of the handler is shown in Listing 10.

Listing 10. The grid handler

private void GridGesture(object sender, StylusSystemGestureEventArgs e)
{
    if (e.SystemGesture == SystemGesture.Drag)
        NextFile();
}

If you are following along with this article and performing the steps, you will notice that there is a SystemGesture.Flick that I didn’t use. This gesture works only in Windows Vista*. Later Windows versions show the Drag gesture. You will also notice that I am not differentiating a forward flick from a backward one (or even horizontal from vertical). That’s because there is no built-in support to do it, but we will take care of that next. Run the program and see that a flick in any direction brings up the next image.

To handle the direction of the flick, you must check its starting and end points. If the distance is larger in the horizontal direction, treat it as a horizontal flick. The sign of the difference between the end and starting points shows the direction. Declare the handler for the StylusDown event for the grid in the .xaml file, as shown in Listing 11.

Listing 11. Declaring the StylusDown event for the grid

<Grid Background="Transparent" 
      StylusSystemGesture="GridGesture"
      StylusDown="GridStylusDown">

The code for this handler is shown in Listing 12.

Listing 12. Creating the handler

// Holds the stylus contact points captured when the stylus goes down
private StylusPointCollection _downPoints;

private void GridStylusDown(object sender, StylusDownEventArgs e)
{
    _downPoints = e.GetStylusPoints(MainImage);
}

When the stylus is down, we store the contact points in the _downPoints collection. You must modify the StylusSystemGesture event to get the direction for the flick. See Listing 13.

Listing 13. Modifying the StylusSystemGesture event

private void GridGesture(object sender, StylusSystemGestureEventArgs e)
{
    if (e.SystemGesture != SystemGesture.Drag)
        return;
    var newPoints = e.GetStylusPoints(MainImage);
    bool isReverse = false;
    if (newPoints.Count > 0 && _downPoints.Count > 0)
    {
      var distX = newPoints[0].X - _downPoints[0].X;
      var distY = newPoints[0].Y - _downPoints[0].Y;
      if (Math.Abs(distX) > Math.Abs(distY))
      {
        isReverse = distX < 0; // Horizontal
      }
      else
      {
        return;  // Vertical
      }
    }
    if (isReverse)
        PreviousFile();
    else
        NextFile();
}

When the Drag gesture is detected, the program creates the new points and verifies the largest distance to determine whether it’s horizontal or vertical. If it’s vertical, the program doesn’t do anything. If the distance is negative, then the direction is backwards. That way, the program can determine the kind of flick and its direction, going to the next or the previous image depending on the direction. The app now works for touch and the mouse.

Adding Touch Manipulation to a WPF App

Adding gestures to your app is a step in the right direction, but it’s not enough. Users may want to perform complex manipulations, use more than one or two fingers, or want a physical behavior that mimics the real world. For that, WPF offers touch manipulations. Let’s create a WPF touch app to see how it works.

In Microsoft Visual Studio*, create a new WPF app and change the window width and height to 800 and 600, respectively. Change the root component to a Canvas. You should have code similar to Listing 14 in MainWindow.xaml.

Listing 14. The new WPF app in Visual Studio

<Window x:Class="ImageTouch.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="600" Width="800">
    <Canvas x:Name="LayoutRoot">
        
    </Canvas>
</Window>

Go to the Solution Explorer and add an image to the project (right-click the project, click Add/Existing Item, and select an image from your disk). Add an image component to the main Canvas, assigning the Source property to the added image:

Listing 15. Image added to the main canvas

<Image x:Name="MainImage" Source="seattle.bmp" Width="400" />

If you run this program, you will see that it is already touch enabled. You can resize and move the window, and touch input is automatically converted to mouse input. However, that’s not what you want. You want to use touch to move, rotate, and resize the image.

For that, you must use the IsManipulationEnabled property. When you set this property to true, the control receives touch events. The ManipulationDelta event is fired every time a manipulation in the control occurs. You must handle it and set the new properties of the image. In the .xaml file, set the property IsManipulationEnabled to true and declare a ManipulationDelta event, as shown in Listing 16.

Listing 16. Enabling touch manipulation

<Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
       IsManipulationEnabled="True" 
       ManipulationDelta="MainImageManipulationDelta">
    <Image.RenderTransform>
        <MatrixTransform />
    </Image.RenderTransform>
</Image>

I have also added a MatrixTransform to the RenderTransform property. You change this transform when the user manipulates the image. The event handler should be similar to Listing 17.

Listing 17. Adding an event handler for image manipulation

private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    FrameworkElement element = sender as FrameworkElement;
    if (element != null)
    {
        var transformMatrix = element.RenderTransform
            as MatrixTransform;
        var matrix = transformMatrix.Matrix;
        matrix.Translate(e.DeltaManipulation.Translation.X,
            e.DeltaManipulation.Translation.Y);
        ((MatrixTransform)element.RenderTransform).Matrix = matrix;
        e.Handled = true;
    }
}

Initially, you get the current RenderTransform of the image, use the Translate method to move it to the new position that the manipulation gives, and then assign it as the matrix for the RenderTransform of the image. At the end, you set the Handled property to true to tell WPF that this method has handled the touch event and WPF should not pass it on to other controls. This should allow the image to move when a user touches it.

If you run the app and try to move the image, you will see that it works but not as expected—the image flickers while moving. All manipulations are calculated relative to the image, but because this image is moving, you may have recursive recalculations. To change this behavior, you must tell WPF that all delta manipulations should be relative to the main window. You do so by using the ManipulationStarting event and setting the ManipulationContainer property of the event arguments to the Canvas.

In MainWindow.xaml, enter the code in Listing 18.

Listing 18. Correcting image movement in MainWindow.xaml

<Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
       IsManipulationEnabled="True" 
       ManipulationDelta="MainImageManipulationDelta"
       ManipulationStarting="MainImageManipulationStarting">

In MainWindow.xaml.cs, enter the code in Listing 19.

Listing 19. Correcting image movement in MainWindow.xaml.cs

private void MainImageManipulationStarting(object sender, ManipulationStartingEventArgs e)
{
    e.ManipulationContainer = LayoutRoot;
}

Now, when you run the app and move the image, it moves with no flicker.

Adding Scaling and Rotation

To enable resizing and rotation, you must use the Scale and Rotation properties of the DeltaManipulation. These manipulations need a fixed center point. For example, if you fix the center point at the top left of the image, elements will be scaled and rotated around this point. To get a correct translation and rotation, you must set this point to the origin of the manipulation. You can set the correct scaling and rotation in code similar to Listing 20.

Listing 20. Setting scaling and rotation

private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    FrameworkElement element = sender as FrameworkElement;
    if (element != null)
    {
        var transformMatrix = element.RenderTransform
            as MatrixTransform;
        var matrix = transformMatrix.Matrix;
        matrix.Translate(e.DeltaManipulation.Translation.X,
            e.DeltaManipulation.Translation.Y);
        var centerPoint = LayoutRoot.TranslatePoint(
            e.ManipulationOrigin, element);
        centerPoint = matrix.Transform(centerPoint);
        matrix.RotateAt(e.DeltaManipulation.Rotation,
          centerPoint.X, centerPoint.Y);
        matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
          centerPoint.X, centerPoint.Y);
        ((MatrixTransform)element.RenderTransform).Matrix = matrix;
        e.Handled = true;
    }
}

Adding Inertia

When you run the app, you will see that the image moves, scales, and rotates fine, but as soon as you stop moving the image, it stops. This is not the desired behavior. You want the same behavior you have when you move an image on a smooth table. It should continue moving slower and slower until it stops completely. You can achieve this effect by using the ManipulationInertiaStarting event. In this event, you state the desired deceleration in pixels (or degrees) per millisecond squared. If you set a smaller value, it will take longer for the element to stop (like on an icy table); if you set deceleration to a larger value, the object takes less time to stop (like on a rough table). Set this value to 0.005.

In MainWindow.xaml, enter the code in Listing 21.

Listing 21. Setting deceleration in MainWindow.xaml

<Image x:Name="MainImage" Source="seattle.bmp" Width="400" 
       IsManipulationEnabled="True" 
       ManipulationDelta="MainImageManipulationDelta"
       ManipulationStarting="MainImageManipulationStarting"
       ManipulationInertiaStarting="MainImageManipulationInertiaStarting"/>

In MainWindow.xaml.cs, enter the code in Listing 22.

Listing 22. Setting deceleration in MainWindow.xaml.cs

private void MainImageManipulationInertiaStarting(object sender, 
    ManipulationInertiaStartingEventArgs e)
{
    e.RotationBehavior.DesiredDeceleration = 0.005; // degrees/ms^2 
    e.TranslationBehavior.DesiredDeceleration = 0.005; // pixels/ms^2
}

Limiting the Inertial Movement

Now, when you run the app, you will see that the manipulations seem close to the physical behavior. But if you give the object a good flick, it goes out of the window, and you have to restart the program. To limit the inertial movement, you must determine whether the delta manipulation is inertial (the user has already lifted his or her finger) and stop it if it reaches the border. You do this with the code in the ManipulationDelta event handler, shown in Listing 23.

Listing 23. Limiting inertial movement

private void MainImageManipulationDelta(object sender, ManipulationDeltaEventArgs e)
{
    FrameworkElement element = sender as FrameworkElement;
    if (element != null)
    {
        Matrix matrix = new Matrix();
        MatrixTransform transformMatrix = element.RenderTransform
            as MatrixTransform;
        if (transformMatrix != null)
        {
            matrix = transformMatrix.Matrix;
        }
        matrix.Translate(e.DeltaManipulation.Translation.X,
            e.DeltaManipulation.Translation.Y);
        var centerPoint = new Point(element.ActualWidth / 2, 
            element.ActualHeight / 2);
        matrix.RotateAt(e.DeltaManipulation.Rotation,
          centerPoint.X, centerPoint.Y);
        matrix.ScaleAt(e.DeltaManipulation.Scale.X, e.DeltaManipulation.Scale.Y,
          centerPoint.X, centerPoint.Y);
        element.RenderTransform = new MatrixTransform(matrix);

        var containerRect = new Rect(LayoutRoot.RenderSize);
        var elementRect = element.RenderTransform.TransformBounds(
                          VisualTreeHelper.GetDrawing(element).Bounds);
        if (e.IsInertial && !containerRect.Contains(elementRect))
            e.Complete();
        e.Handled = true;
    }
}

Now, determine whether the transformed image rectangle is in the container rectangle. If it isn’t and the movement is inertial, stop the manipulation. That way, the movement stops, and the image doesn’t go out of the window.

Conclusion


As you can see, adding touch manipulations to a WPF application is fairly easy. You can start with the default behavior and add gestures or full touch support with a few changes to the code. One important thing to do in any touch-enabled app is to rethink the UI so that users feel comfortable using it. You can also use different styles for the controls on a touch device, so the buttons are larger and the lists are more widely spaced only when using touch. With touch devices becoming increasingly common, optimizing your apps for touch will make them easier to use, thus pleasing your existing users and attracting new ones.

For More Information


About the Author


Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) located in Brazil. He is a developer, consultant, and author having written five Delphi books, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and American magazines and websites.

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Coding for Custom Devices in Windows* 8.1 and Windows* RT 8.1


By Bill Sempf

Downloads


Coding for Custom Devices in Windows* 8.1 and Windows* RT 8.1 [PDF 803KB]

The Windows 8.1 operating system (as well as Windows RT 8.1) has a lot of interesting new features, not the least of which is the return of the vaunted Start button. New device support for Windows* Store apps, however, gives developers some exciting new use cases to explore.

One particularly useful feature of Windows 8.1 tablets is the USB port. Support for more sophisticated devices outside of basic driver interactions really makes you consider Windows 8.1 tablets for more line-of-business apps, like cash registers and kiosk user interaction.

New Device APIs


Application programming interfaces (APIs) for new devices focus on USB-driven hardware in Windows 8.1. Although all of them are interesting, this article focuses on two in particular: human interface devices (HIDs) and point-of-service (PoS) devices.

Human Interface Device Support

In Windows parlance, a human interface device means more than a keyboard or mouse. It’s true that HIDs started with those items, but they also included joysticks and game controllers. Now, though, HID is a protocol that can support a much larger set of devices, including those beyond the USB realm.

The HID protocol that Windows 8.1 supports is bus agnostic. What’s more, it has been generalized enough to handle a broad set of devices that humans use to communicate with computers, including various pointing devices, knobs and sliders, telephones, DVD-style controls, steering wheels, and rudder pedals. Of more interest to this article are sensor-type interfaces, such as those found on thermometers, pressure sensors, and bar code readers.

Point-of-Service Support

Point-of-service (PoS) device access is a bit more business layer and a bit less hardware layer than HID support, but there are many similarities. PoS specifically focuses on supporting the USB devices you would expect to find attached to cash registers, inventory systems, or time clocks—bar code scanners and magnetic card readers—making it unnecessary to use a camera or other built-in feature for this functionality.

Other Devices

PoS hardware isn’t the only type of devices added to Windows 8.1, however. New USB, Bluetooth*, and 3D printer device support shows up as well.

USB support isn’t new for Windows 8.1, but the idea of a custom device is. The bulk of this article is about how to write Windows Store apps for a device for which Microsoft does not provide a driver.

Windows Store apps now have access to the RFCOMM and GATT APIs to access Bluetooth devices. After a device has been paired and security checks approved, Windows RT has programmatic access to these devices. Similarly, printing isn’t new to Windows RT devices, but the addition of the IXpsDocumentPackageTarget3D interface to the Windows RT APIs means that you theoretically can build Windows Store apps to run your MakerBot—but that’s for a later article.
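
As a rough sketch of the RFCOMM side (assuming a device that is already paired and that exposes the standard Serial Port Profile service; error handling omitted), enumeration from a Windows Store app might look like this:

using Windows.Devices.Bluetooth.Rfcomm;
using Windows.Devices.Enumeration;

// Find paired devices that advertise the Serial Port Profile service.
string selector = RfcommDeviceService.GetDeviceSelector(RfcommServiceId.SerialPort);
var devices = await DeviceInformation.FindAllAsync(selector);

if (devices.Count > 0)
{
    RfcommDeviceService service = await RfcommDeviceService.FromIdAsync(devices[0].Id);
    // A StreamSocket can then be connected to service.ConnectionHostName and
    // service.ConnectionServiceName to exchange data with the device.
}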

If you don’t happen to have a magstripe reader, 3D printer, and steering wheel for your development laptop, the JJG Technologies Super Multipurpose USB Test Tool (Super MUTT) can solve your problems. Microsoft helped design Super MUTT to help emulate such devices. This USB peripheral supports control, interrupt, bulk, isochronous, and streams data transfers.

Human Interface Devices


The HID standard was initially designed to connect mouse devices and keyboards via USB. It is bus agnostic by design, which allowed ports to other protocols, like Bluetooth and infrared. It is newly supported by Windows 8.1 in the Windows.Devices namespace.

Integrating with Hardware Partners

The goal of HID support in Windows 8.1 is for manufacturers that create interface devices to be able to bundle Windows Store apps with their devices. Essentially, certified independent hardware vendors have the option to bundle a Windows Store app with their hardware so long as it uses the standard Windows hardware driver.

The hardware, once installed, would then be able to interact with the Windows Store to download and install the relevant app when the hardware is connected to the host. This makes an optimal, noninvasive environment for additional device functionality while making best use of the built-in Windows hardware support.

HID Types

Although the HID protocol was originally designed for literal human interaction, it has been expanded to support a large number of low-latency I/O device interfaces. Now, a broad range of devices support HID, and interested device manufacturers can contact Microsoft for details. Developers should check the device documentation to ensure support. The new Windows 8.1 APIs define several HID classes, including:

  • Mouse class driver and mapper driver
  • Game controllers
  • Keyboard and keypad class driver and mapper driver
  • Flight Mode Switch
  • System controls (Power)
  • Consumer controls (HIDServ.dll)
  • Pen device
  • Touch screen
  • Sensors
  • HID UPS battery

Enumeration and the DeviceSelector

Accessing HID objects is a two-step process. First, the app must find the Advanced Query Syntax (AQS) string related to the HID device you are manipulating. Then, the app can use the DeviceInformation.FindAllAsync() method to retrieve a collection of DeviceInformation objects.

You access the AQS string for the device via the HidDevice.GetDeviceSelector(usagePage, usageId) method. This method takes values from the special-purpose hardware ID enumeration and returns the AQS string needed to get a collection of DeviceInformation objects. See Table 1.

Table 1. Accessing the AQS string

Device TypeUsage PageUsageHardware ID
Pointer0x010x01HID_DEVICE_SYSTEM_MOUSE
Mouse0x010x02HID_DEVICE_SYSTEM_MOUSE
Joystick0x010x04HID_DEVICE_SYSTEM_GAME
Game pad0x010x05HID_DEVICE_SYSTEM_GAME
Keyboard0x010x06HID_DEVICE_SYSTEM_KEYBOARD
Keypad0x010x07HID_DEVICE_SYSTEM_KEYBOARD
System control0x010x80HID_DEVICE_SYSTEM_CONTROL
Consumer audio control0x0C0x01HID_DEVICE_SYSTEM_CONSUMER

When an AQS string is available, the DeviceInformation.FindAllAsync(aqs) method returns a collection of attached HIDs as DeviceInformation objects. Once a device has been opened from its DeviceInformation entry, the HID API provides control and report classes. Control classes give the application control over certain defined values in the device, and report classes are how the app reads data back from the device.
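
For example, a minimal sketch of this flow (assuming a joystick-class device from Table 1, an async calling context, and an app manifest that declares the matching HID device capability) might look like this:

using Windows.Devices.Enumeration;
using Windows.Devices.HumanInterfaceDevice;
using Windows.Storage;

// Build the AQS selector for a joystick-class HID (usage page 0x01, usage ID 0x04).
string selector = HidDevice.GetDeviceSelector(0x01, 0x04);

// Enumerate all matching devices currently attached to the system.
var devices = await DeviceInformation.FindAllAsync(selector);

if (devices.Count > 0)
{
    // Open the first match; a null result means access was denied, for example
    // because the usage is reserved by Windows or the capability is missing.
    HidDevice device = await HidDevice.FromIdAsync(devices[0].Id, FileAccessMode.Read);
}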

Control Classes

Control classes represent properties of an HID with which apps can interact. For instance, the sensitivity of a touchpad or the lights on a keyboard would be accessed via control classes. Control classes come in two flavors: Boolean (an on/off switch) and numeric.

Numeric controls are 64-bit integers that represent the value of some parameter of the HID. A DeviceInformation object can support any number of NumericControls, which are distinguished by ID and described by the ControlDescription property. The HidNumericControl exposes static and scaled value properties, which are the only read/write properties of the class. Read-only properties include the related UsageId and UsagePage as well as the control’s ID and description, the latter stored in the ControlDescription property.

The HidBooleanControl is just the on/off version of the numeric control. It has largely the same accessibility and functionality, except that its value is an on/off state rather than a number. Boolean controls typically drive binary features such as a device’s lights or power state.

Report Classes

Report classes represent the values that the application needs to collect from the device. Reports provide data about available features, user input, and requests for device changes.

The host and device can communicate the supported feature through HidFeatureReport. The Data property holds relevant information about the features, including the Boolean and numeric controls that you can use to change or collect said data. The input coming from the device is found in HidInputReport. The HidInputReport class lets the app request the Data property from a defined input vector. All data is provided at once. If there are multiple input vectors, they are delineated in the IBuffer that the Data property returns.

Output reports request changes on the device. This object breaks the model a bit; the output report is “sent” to the device to request a change. After the Data property of the report has been set, it is sent using the SendOutputReportAsync method of the HidDevice object.
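
As a hedged sketch (assuming device is an HidDevice already opened with FileAccessMode.ReadWrite, and that the payload value and byte offset are placeholders to be replaced from the device documentation), the input and output report paths look roughly like this:

// Input reports: subscribe to data pushed by the device.
device.InputReportReceived += (sender, args) =>
{
    var data = args.Report.Data;   // IBuffer holding the raw report bytes
};

// Output reports: request a change on the device.
var outputReport = device.CreateOutputReport();
byte[] payload = { 0x01 };         // hypothetical value; consult the device documentation
WindowsRuntimeBufferExtensions.CopyTo(payload, 0, outputReport.Data, 1, payload.Length);
await device.SendOutputReportAsync(outputReport);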

Point-of-Service Devices


Whereas HIDs have an agreed-upon standard, PoS devices do not. PoS devices are those things we expect to see at cash registers, like bar code and magnetic stripe readers (although some magstripe readers do adhere to the HID standard).

Microsoft seems to be aiming squarely at making Windows 8.1 a viable target for PoS devices. Picture a retail store in which all the staff carry a Windows 8.1 tablet, ready to scan an item and take your card. With a unified standard for PoS devices, this vision could be a reality.

Microsoft Office integration in Windows is a big deal, with the advanced back-office offerings that Microsoft provides. Integrating Windows 8.1 tablet hardware with the back-office software was a priority, and hardware integration was part of that. Microsoft has chosen to focus on the two main units: bar code scanners for inventory and magnetic stripe readers for handling credit cards.

Bar Code Scanners

Bar code scanners are handled much like an HID device, with a class representing the device. The BarcodeScanner class fires a StatusUpdated event that alerts the host to status changes, and the DataReceived event carries the scanned data. From there, the app can decode the input and interact with the underlying retail system.

To prevent conflicts with the underlying operating system, which might have low-level access to the incoming data, Windows RT has a facility for claiming a bar code scanner. The app can populate a BarcodeScanner object with the first available scanner by calling the static BarcodeScanner.getDefaultAsync() method. With an instance to the BarcodeScanner object, the app can call claimScannerAsync() to get exclusive access to that scanner. After the scanner has been claimed, subscribing to the datareceived event gains access to the incoming data.
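
A minimal sketch of that claim-and-subscribe flow (null checks and error handling omitted) might look like this:

// Get the first available scanner and claim it for exclusive use.
BarcodeScanner scanner = await BarcodeScanner.GetDefaultAsync();
ClaimedBarcodeScanner claimedScanner = await scanner.ClaimScannerAsync();

// Ask the scanner to decode raw scans into labels, then listen for data.
claimedScanner.IsDecodeDataEnabled = true;
claimedScanner.DataReceived += (sender, args) =>
{
    var label = args.Report.ScanDataLabel;                         // decoded label data
    var symbology = BarcodeSymbologies.GetName(args.Report.ScanDataType);
};

await claimedScanner.EnableAsync();   // start delivering scan data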

After data has been received, the BarcodeScannerReport class (exposed through the event arguments) holds the data from the scan. Like the HID standard, the PoS protocol uses the “report” model to pass data back to the app. The report has these properties:

  • ScanData: The raw data coming back from the hardware
  • ScanDataLabel: The decoded label, which is just the useful data
  • ScanDataType: The label type, as defined in the BarcodeSymbologies class

Bar codes come in a number of standard symbologies, all based on International Organization for Standardization (ISO) standards. The Windows RT classes handle several different symbologies, all of which can be found in the MSDN* library.

Magnetic Stripe Readers

Not surprisingly, the MagneticStripeReader class works largely the same as the BarcodeScanner. The GetDefaultAsync method gets the first available MagneticStripeReader and populates the object. The ClaimReaderAsync method locks the selected reader for the Windows Store app and returns a ClaimedMagneticStripeReader.

The claimed magnetic stripe reader has a set of events to tell the app what kind of card is swiped. The events include:

  • AamvaCardDataReceived : A motor vehicle card was swiped.
  • BankCardDataReceived :  A bank card was swiped.
  • VendorSpecificDataReceived : Some unrecognized card was swiped.
  • ErrorOccurred : An error occurred with the card.
  • ReleaseDeviceRequested : The device received a request to release an exclusive claim.
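
A minimal sketch of claiming a reader and handling a bank card swipe (error handling omitted) might look like this:

// Get the first available reader and claim it for exclusive use.
MagneticStripeReader reader = await MagneticStripeReader.GetDefaultAsync();
ClaimedMagneticStripeReader claimedReader = await reader.ClaimReaderAsync();

// Handle bank card swipes; the event arguments carry the decoded fields.
claimedReader.IsDecodeDataEnabled = true;
claimedReader.BankCardDataReceived += (sender, args) =>
{
    string account = args.AccountNumber;
    string surname = args.Surname;
};

await claimedReader.EnableAsync();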

After data has been received, the magnetic stripe reader API provides strongly typed objects for the data of American Association of Motor Vehicle Administrators (AAMVA) and bank card data. AAMVA cards contain:

  • Address
  • FirstName
  • Restrictions
  • BirthDate
  • Gender
  • State
  • City
  • HairColor
  • Suffix
  • Class
  • Height
  • Surname
  • Endorsements
  • LicenseNumber
  • Weight
  • ExpirationDate
  • PostalCode
  • EyeColor
  • Report

Bank cards contain:

  • AccountNumber
  • ExpirationDate
  • FirstName
  • MiddleInitial
  • Report
  • ServiceCode
  • Suffix
  • Surname
  • Title

The vendor-specific event handler returns the raw data—probably a placeholder for further expansion.

One benefit of magnetic cards over optical codes is encryption. Standardized algorithms are designed into the protocols to ensure broad coverage among consuming applications. The Devices API handles the Triple DES Derived Unique Key Per Transaction out of the box (the most common bank card algorithm) and has a spot for vendor-specific algorithms.

Working with the Super MUTT


To test code that would be using an HID or PoS device, the best solution is a Super MUTT. This simple device (shown in Figure 1) can pretend to be an HID or PoS device to prototype code.


Figure 1. The Super MUTT device

Getting started with the Super MUTT device is somewhat involved. Here’s the basic process:

  1. Download the MUTT software package from MSDN.
  2. Install the emulators by running install.cmd for your processor type.
  3. Run these two commands to set up the Super MUTT to run with Windows Store apps:

Muttutil.exe –forceupdatefirmware
Muttutil.exe –setwinrthid

With that done, download the HID sample from MSDN. The sample has a lot of overhead, but the code in Listing 1 gives you a more straightforward look at the core HID functionality: specifically, enumerating devices, then configuring and populating a feature report.

Listing 1. Coding for core Super MUTT functionality

using System;
using System.Linq;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.ApplicationModel.Activation;
using Windows.Devices.Enumeration;
using Windows.Devices.HumanInterfaceDevice;
using Windows.Storage;
using Windows.UI.Xaml;

namespace UsingTheSuperMutt
{
    sealed partial class App : Application
    {
        public App()
        {
            this.InitializeComponent();
        }
        protected override void OnLaunched(LaunchActivatedEventArgs e)
        {
            Window.Current.Activate();
        }
        private async void UseTheMutt()
        {
            //Initialize the device selector with the usage page and ID, the vendor and the  product ID
            string selector = HidDevice.GetDeviceSelector(0xFF00, 0x0001, 0x045E, 0x078F);

            // Enumerate devices using the selector
            var devices = await DeviceInformation.FindAllAsync(selector);

            if (devices.Count > 0)
            {
                // Open the supermutt
                HidDevice device = await HidDevice.FromIdAsync(devices.ElementAt(0).Id,
                                   FileAccessMode.ReadWrite);
                //Let's make the light blink
                //Get the report from the SuperMutt that represents the LED
                var featureReport = device.CreateFeatureReport(0x0200);

                // Only grab the byte we need
                Byte[] bytesToModify = new Byte[1];

                //Copy the report
                WindowsRuntimeBufferExtensions.CopyTo(featureReport.Data, 0x0000, bytesToModify, 0, bytesToModify.Length);

                //Set the blink rate
                bytesToModify[0] = 0x0004;

                //Move the edited report
                WindowsRuntimeBufferExtensions.CopyTo(bytesToModify, 0, featureReport.Data, 0x0100, bytesToModify.Length);

                //Send it to the SuperMutt
                await device.SendFeatureReportAsync(featureReport);

            }
        }
    }
}

After running this code, the light on your Super MUTT should start blinking. You can play with the bytesToModify[0] parameter to set the blink rate and see how the report would get to the device and persist in the memory. Obviously, each device’s report locations differ by manufacturer, but this base code will get a project started. You just need to change the address locations based on the device documentation.

Conclusion


The additions to the Windows.Devices namespace in Windows RT show Microsoft’s commitment to new interface devices working with Windows Store apps. Microsoft is clearly focused on supporting hardware vendors and developers who are bringing new hardware into the Windows 8 ecosystem.

Developers of new hardware have the ability to bundle Windows Store apps with new devices, making configuration, training, and even use of devices more integrated with desktop apps. The integrated support for many HID-supported devices will make app development easier and make using the built-in Windows drivers more appealing to device developers.

Developers of retail point-of-sale software will be pleased with the integration of magstripe and bar code readers into Windows RT. These new API classes make the creation of simple PoS systems much easier, along with the new deployment model and innovative user interface.

Resources


About the Author

Bill Sempf is a software security architect. His experience includes business and technical analysis, software design, development, testing, server management and maintenance, and security. In his 20 years of professional experience, he has participated in the creation of more than 200 applications for large and small companies, managed the software infrastructure of two Internet service providers, coded complex software in every environment imaginable, tested the security of all types of applications and APIs, and made mainframes talk to cell phones. Bill is the author of C# 5 All in One for Dummies and Windows 8 Programming with HTML5 For Dummies, coauthor of Effective Visual Studio.NET, and a frequent contributor to industry magazines; he has spoken at BlackHat, CodeMash, DerbyCon, BSides, and DevEssentials.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Fast ISPC Texture Compressor


This article and the attached sample code project were written by Marc Fauconneau Dufresne at Intel Corp.

This sample demonstrates a state-of-the-art BC7 (DX11) texture compressor. BC7 partitioning decisions are narrowed down in multiple stages. Final candidates are optimized using iterative endpoint refinement. All BC7 modes are supported. SIMD instruction sets are exploited using the Intel SPMD Compiler. Various quality/performance trade-offs are offered.


PERCEPTUAL COMPUTING: Perceptual 3D Editing


Downloads


PERCEPTUAL COMPUTING: Perceptual 3D Editing [PDF 839KB]

By Lee Bamber

1. Introduction


If you’re familiar with Perceptual Computing and some of its applications, you will no doubt be wondering to what degree the technology can be used to create and manipulate the 3D world. Ever since the first batch of 3D games, modelling and motion capture have been parts of our app-making toolkit, and a wide variety of software and hardware has sprung up to answer the call for more immersive and life-like crafting.

When looking at Perceptual Computing as the next natural technology to provide new solutions in this space, you might be tempted to think we can finally throw away our mice and keyboards and rely entirely on our real-world hands to create 3D content. When you start down this road, you begin to find it both a blessing and a curse, and this article will help you find your way with the help of one programmer’s map and a few sign posts.


Figure 1. A simple 3D scene. The question is, can this be created with Perceptual Computing?

Readers should have a basic understanding of Perceptual Computing as a concept and a familiarity with the hardware and software mechanisms required to control an application. No knowledge of programming or specific development platforms is required, only an interest in one possible evolutionary trajectory for 3D content creation.

2. Why Is This Important


It can be safely assumed that one of the benefits of increasingly more powerful devices will be the adoption of 3D as a preferred method of visual representation. The real world is presented to us in glorious 3D and remains our preferred dimension in which to interact and observe. It is fair to conclude that the demand for 3D content and the tools that create it will continue to increase, moving far beyond the modest needs of the games industry to become a global hunger.

The current methods for creating 3D content and scenes are sufficient for the present, but what happens when five billion users want to experience new 3D content on a daily basis? A good 3D artist is expensive and hard to find, and good 3D content takes a long time to create! What if there was another way to fulfil this need?

3. The Types of 3D Content


If you are familiar with 3D game creation, you will be aware of several types of 3D content that go into a successful title. The terrain and structures that make up the location, the characters that play their roles, and the objects that populate your world and make everything a little more believable. You also have 3D panels and ‘heads-up-displays’ to feed information to the player, and a variety of 3D special effects to tantalize the watcher. How would we accomplish the creation of these different types using no mouse, no keyboard, no controller, or sculpting hardware? What might this creative process look like with Perceptual Computing?

4. Editing Entire Worlds


The terrain in a scene is often stretched out over an extremely large area, and either requires a team of designers to construct or a procedural function to randomize the world. When no specific location detail is required, you could use Perceptual Computing Voice Recognition to create your desired scene in a matter of seconds.

Just imagine launching your new hands-free 3D editing tool by saying “New Scene.”


Figure 2. The software immediately selects a brand new world for you to edit

You then decide you want some vegetation, so you bring them forth as though by magic with the words “Add Trees.”


Figure 3. With a second command, you have added trees

You want your scene to be set at midnight, so you say “Set Time To Midnight.”


Figure 4. Transform the scene completely by using a night setting

Finally to make your creation complete, you say “More Hills” and the tool instantly responds by adding a liberal sprinkling of hills into your scene.


Figure 5. Making the terrain more interesting with a few extra hills.

The user has effectively created an entire forest world, lumpy and dark, in just a few seconds. You can perhaps see the possibilities for increased productivity here, but you can also see that we have removed the need for any special 3D skills. Anyone can now create their own 3D landscapes; all they need is a voice and a few common phrases. If at any time they get confused, they can say "Help" and a full selection of command words is displayed.

5. Editing 3D in Detail


The world editing example is nothing remarkable, or exclusively the domain of Perceptual Computing, but suggestive of the types of interfaces that can be created when you think out of the box. The real challenge comes when you want to edit specific details, and this is where Perceptual Computing takes center stage.

Now imagine that during the general world editing you wanted to create something specific, say a particularly gnarled tree. The "Add Tree" command would be too generalized and random. So, just as you would in real life, you point at the screen and then say "Add Tree There."


Figure 6. As the user points, the landscape highlights to indicate where you’re pointing

Unfortunately the engine assumed you wanted the tree in context and selected the same tree as the previous additions. It is fortunate then that our revolutionary new tool understands various kinds of context, whether it be selection context or location context. By saying “Change Tree to Gnarled,” the tree instantly transforms into a more appropriate visual.


Figure 7. Our scene now has specific content created exactly where the user wanted it

As you increase the vocabulary of the tool, your user is able to add, change, and remove an increasing number of objects, whether they are specific objects or more general world properties. You can imagine the enormous fun you can have making things pop in and out of existence, or transforming your entire world with a single word.

For locomotion around your world, exactly the same interface is used but with additional commands. You could point to the top of a hill or distant mountain and say “Go There.” Camera rotation could be tackled with a gestured phrase “Look At That,” and when you want to save your position for later editing, use commands such as “Remember This Location” and “Return To Last Location.”

6. The Trouble with 3D Editing


No article would be complete without an impartial analysis of the disadvantages to this type of interface, and the consequences for your application.

One clear advantage a mouse will have over a Perceptual coordinate is that the mouse pointer can set and hold a specific coordinate for seconds or minutes at a time without flinching. You could even go and make a cup of tea and be confident your pointer will be at the same coordinate when you return. A Perceptual coordinate, however, perhaps provided by a finger pointing at the screen, will rarely be able to hold a fixed coordinate for even a fraction of a second, and the longer the user attempts to maintain a fixed point, the more annoyed they will get.

A keyboard can instantly communicate one of 256 different states in the time it takes to look and press. To get the Perceptual Camera to identify one of 256 distinct and correct signals in the same amount of time would be ambitious at best.

Given these comparisons, it should be stated that even though you can increase productivity tenfold on the creation of entire worlds, productivity could decrease dramatically if you tried to draw some graffiti onto the side of a wall or building. If you could ever summon a laser to shoot out of your finger, or gain the power of eye lasers, you would quickly discover just how difficult it is to create even a single straight line.

The lesson here is that the underlying mechanism of the creative process should be considered entirely. We can draw a straight line with the mouse, touchpad, and pen because we’re practised at it. We are not practised at doing it with a finger in mid-air. The solution would be to pre-create the straight line ahead of time and have the finger simply apply the context so the software knows where to place the line. We don’t want to create a “finger pointer.” We want to place a straight line on the wall, so we change the fundamental mechanism to suit our Perceptual approach, and then it works just fine.

7. Other Types of 3D Editing


The same principles can be applied to the creation of structures, characters, creatures, inanimate objects, and pretty much anything else you can imagine for your 3D scene. A combination of context, pointing, and voice control can achieve an incredible range of creative outcomes.

Characters - Just as you design your avatars in popular gaming consoles or your favourite RPG, why not have the camera scan you to get a starting point for creating the character? Hair color, head size, facial features, and skin color can all be read instantly and converted into attributes in the character creation process. Quickly identifying which part of the body you want to work on and then rolling through a selection would be more like shopping than creating, and much more enjoyable.

Story Animation – Instead of hiring an expensive motion capture firm, why not record your own voice-over scripts in front of the Perceptual Camera? It would read not only your voice, but also track your upper body skeleton and imprint those motions onto the character to which you intend to apply the speech. Your characters will now sound and animate as real as the very best AAA productions!

Structures – The combination of a relatively small number of attributes can produce millions of building designs, all done in a few seconds. Take these two examples and the buildings created from two series of commands: “Five storeys. Set To Brick. Five Windows. [point] Add Door. [point] Remove Window” and “One storey. [point] Add Window. Go To Back. Add Three Doors. Set To Wood.” Naturally the tool would have to construct the geometry and make smart decisions about the interconnectivity of elements, but the set of element types is finite.

8. Tricks and Tips


Do’s

  • Make it a habit to continually compare your Perceptual solution with the traditional methods. If it’s more difficult, or less satisfying, should it really be used?
  • Try your new interface models on new users periodically. If your goal is a more accessible editing system, you should be looking for users without traditional creativity skills.
  • Remember that when using voice recognition, individual accents and native languages will play a huge role in how your final software is received. Traditional software development will not have prepared you for the level of testing required in this area.
  • Experiment with additional technologies that complement the notion of hands-free intelligent interfaces. Look at virtual reality, augmented reality, and additional sensors.

Don’ts

  • Do not create interfaces that require the user to hold their arm forward for prolonged periods of time. It’s uncomfortable for the user, and very fatiguing long-term.
  • Do not eliminate the keyboard, mouse, or controller from your consideration when developing new Perceptual solutions. You might find mouse- and voice-control is right for your project, or keyboard and “context pointing” in another.
  • Do not assume project lengths can be determined when delving into this type of experimental development. You will be working with early technology in brand new territories so your deliverables should not be set in stone.

9. Final Thoughts


As technology enthusiasts, we wait for the days of Holodecks and of chatting with our computers as members of the family. It may surprise you to learn we’re already on that road, and these magical destinations are a lot closer than you might think. Voice recognition is now usable for everyday applications, the computer can detect what we are looking at to gain context, and we have the processing power to produce software systems expert enough to fill in the gaps when required.

All we need is a few brave developers stubborn enough to reject yesterday’s solutions and become pioneers in search of new ones. Hopefully, I have painted an attractive picture of how creativity can be expressed without the need for traditional hardware, and highlighted the fact that the technology exists right now. This is not just a better tool or quicker process, but a wholesale transformation of how creativity can be expressed on the computer. By removing all barriers to entry, and eliminating the need for technical proficiencies, Perceptual Computing has the power to democratise creativity on a scale never before seen in our industry.

About The Author


When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

The application that inspired this article and the blog that tracked its seven-week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Dual-Camera 360 Panorama Application


Download source code

Download paper as PDF

Introduction

Taking panoramic pictures has become a common scenario and is included in most smartphones’ and tablets’ native camera applications. Instagram alone has close to 1 million pictures tagged as panoramas, and flickr.com has over 1.2 million such uploads. Traditionally, the user pans the device using a single camera to acquire images, and the application stitches the images together to create the panorama. Since most devices have both front and rear facing cameras, we could potentially utilize both cameras simultaneously to quickly capture a large panorama.

Current panorama applications on the market only support a ~180 to 270 degree maximum rotation, but using two cameras allows us to capture a full 360 degrees with only a 180 degree rotation of the device. There is a lot of value in decreasing the device rotation because it is very difficult to keep a phone or tablet steady when trying to rotate a large amount. Rotating only 180 degrees allows users to complete acquisition much faster by rotating the device in their hands without the need for a full body rotation. This will enable a more consistent and easy experience for the user.

First, I will go through a general overview of the implementation, then talk about challenges and our results. Please note: All sample software in this document is provided under the Intel Sample Software License. See Appendix A for details.

Implementation

In this section, we will discuss the steps necessary to capture images using both cameras and stitch them together to make a complete panorama picture. For reference, I will include code samples in C++ that utilize Microsoft DirectShow* APIs, but you can choose to develop with other APIs such as Microsoft Media Foundation.

First, we need to initialize the cameras and sensors. The method for doing so will depend on the APIs available for your target platform.


	const int VIDEO_DEVICE_0 = 0; // zero based index of video capture device to use
	const int VIDEO_DEVICE_1 = 1;

	Capture frontCam = new Capture(VIDEO_DEVICE_0);
	Capture rearCam = new Capture(VIDEO_DEVICE_1);

	Gyrometer _gyrometer = new Gyrometer();
	Compass _compass = new Compass();

 

Here we should also specify capture resolution. The images acquired from each camera should be the same resolution. You may also want to control other things, like exposure or autofocus.

Now we will create a function to capture and save images:


void acquireImages(int imageNumber)
{
	//save raw captures in memory
   	_frontImage = frontCam.Click();        
	_rearImage = rearCam.Click();

	//turn raw image into a readable format
	Bitmap front = new Bitmap(_frontImage);
	Bitmap rear = new Bitmap(_rearImage);

	//You may need to rotate the images based on your platform
	front.RotateFlip(RotateFlipType.RotateNoneFlipY);
	rear.RotateFlip(RotateFlipType.RotateNoneFlipY);

	//Save images to working directory
	front.Save("images/" + imageNumber + "_front.jpeg");               
	rear.Save("images/" + imageNumber + "_rear.jpeg");
}

 

Next we create a function to acquire images. We tested multiple ways to implement acquisition: timer-based, gyro-based, and compass-based. Different platforms have different sensors available, which may determine what method you can use. In these samples I use NUM_IMAGES to denote the number of images we take with each camera. The number of images varies depending on the field of view of the platform’s cameras. If you have too few images, the images won’t have enough overlap and won’t stitch together well. If you have too many images, you will have duplicates and processing time will be much higher than it needs to be. It takes experimentation to determine the ideal number of images you need to take.

Using a timer is a simple and reliable way to control capture and does not require any special sensors. It is, however, restricting to users since they must follow precise timing intervals for image capture.


for (int currentImage = 0; currentImage < NUM_IMAGES; currentImage++ )
	{ 
		acquireImages(currentImage);

		//specify interval between captures
		Thread.Sleep(750);  
	}

 

Using a gyroscope is another potential method; however, it can produce inconsistent results. It does allow the user to have control of the speed at which they capture. The gyroscope reports the angular frequency of the device. Since we want to know the angular position of the device, we can use this formula to get angular position:

new angular position = angular position + (angular velocity * sampling interval)

The angular position becomes more accurate as the sampling interval gets smaller. Unfortunately we can’t sample at a high enough rate to maintain a perfectly accurate angular position, so acquisition intervals can be inconsistent when rotating at different speeds. This leads to inconsistent amounts of overlap on our captured images, which may cause stitching to fail.

Gyro-Based Acquisition:


position = 0;
while (currentImage < NUM_IMAGES)
{
	//use the angular velocity around the axis of rotation (Z here; the axis depends on device orientation)
	position += _gyrometer.GetCurrentReading().AngularVelocityZ * GYRO_SAMPLE_INTERVAL;

	//capture image when position is at desired position +/- error
	if(position > ((currentImage*angleBetweenImages)-angleErrorTolerance) &&
		position < ((currentImage*angleBetweenImages)+angleErrorTolerance))
	{
		acquireImages(currentImage);
		currentImage++;
	}
	Thread.Sleep(GYRO_SAMPLE_INTERVAL);
}

We found the best method to use is the compass sensor. This method can capture images at very accurate intervals, which means our images will overlap the optimal amount every time. The compass is not available on all devices, however.

Compass-Based Acquisition:


while (currentImage < NUM_IMAGES)
{
	if (currentImage == 0)
	{
		//initialize position at first image capture
		startPos = _compass.GetCurrentReading().HeadingMagneticNorth;
	}

	position = startPos - _compass.GetCurrentReading().HeadingMagneticNorth;

	if (position < 0)
	{
		//forces value to be between 0 and 360
		position = 360 + position;
	}

	//capture image when position is at desired position +/- error
	if(position > ((currentImage*angleBetweenImages)-angleErrorTolerance) &&
		position < ((currentImage*angleBetweenImages)+angleErrorTolerance))
	{
			acquireImages(currentImage);
			currentImage++;
	}
}

After we have captured our images, we can stitch the images together. Because panorama stitching is a complex topic in itself, I can only give a high level overview. We used the stitching library in an open source project called OpenCV to do processing.

We will store our images in the “images” folder in our working directory. We can now load the images into our application and call the stitching function.

	string[] imgs = Directory.GetFiles("images");
	stitch(imgs, result);                 // result receives the stitched panorama
	result.save("images/result.jpg");

Assuming image stitching was successful, we now have our result panorama.

Challenges

Several issues arose when we tried to implement this idea on different platforms. We ran into issues with camera angle and unsupported camera features. There are workarounds for these issues, but some platforms do not support simultaneous use of both cameras, which makes the application impossible to implement on them.

On some platforms the manufacturer decided to mount the front camera facing at an upward angle and the rear camera facing at a downward angle, with the intention of the cameras being used when the tablet is held at an angle, similar to a laptop. This mismatch in angles between the front and rear cameras means the acquired images do not overlap properly and cannot produce a quality panorama when the device is used in landscape mode. The best workaround for this issue is to use the device in portrait orientation, which makes the vertical fields of view equal and eliminates the issue.

This is an uncropped example of a landscape capture. You can see there is a slight mismatch in the camera angles, which leaves much of the image unusable after cropping.

Because of the mismatch, we must waste most of the image height during cropping. The original pictures were 1080px tall and the final panorama was 705px tall. We lost 35% of the vertical pixels.

This is an uncropped example of a portrait capture. You can see the images match up well.

Since the images match up well, we don’t have to waste much of the height. The original images were 1920px tall and the result is 1640px tall. We only lost 14% of the height.

Even on the same platform, the front and back cameras often have different focal lengths, sensor sizes, and available capture modes. The maximum resolution of the resulting panorama is limited by the resolution of the smaller of the two cameras. Different focal lengths can create stitching issues if the difference is too large and also determine how many images need to be taken. A camera with a very wide field of view will be able to take fewer pictures and require less rotation than one with a narrow field of view. On different cameras, the manufacturer will allow different usage modes to be supported, like “preview” for fast, low-quality streaming and “capture” for high quality but slow acquisition speed. We found that these features are not available on all platforms, so the application must be tested and modified for each target platform.

Conclusion

Utilizing dual cameras to capture large panoramas is a valuable and worthwhile concept that works well provided the correct hardware and driver support. On a platform with cameras directed perpendicular to the device and a compass sensor available, the application will work with little modification. On a platform with offset cameras and/or missing sensors, it may require additional work to get the application working. Due to the huge amount of variation between platforms, it is difficult to create a one-size-fits-all application, so we must develop and test the application on each target platform. Despite some potential implementation challenges, the concept improves greatly on current panorama capture applications and enables easier and faster image capture and better user experience.

Resources

Microsoft DirectShow API (Camera Interfacing/Streaming)
Microsoft Sensor API (Sensors)
OpenCV (Panorama Stitching)

 

Intel, the Intel logo, Atom, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Retargeting a BayTrail* Windows* 8 Store Sample Application to Windows 8.1


Download Article

Download Retargeting a BayTrail* Windows* 8 Store Sample Application to Windows 8.1 [PDF 602KB]

Abstract

This article discusses the process of retargeting an existing healthcare sample app from Windows 8 to Windows 8.1. In particular, it covers the special features added to Visual Studio 2013 for retargeting and importing Windows Store apps, the build errors encountered, and the challenges faced. In addition, the use of 3rd party libraries and newly available UI controls and features is discussed.

Overview


Windows 8.1 brings new features, APIs, UI, and performance enhancements to the Windows platform. It is up to the developer to take advantage of new features and re-implement the parts of an app that will benefit from the Windows 8.1 enhancements. Even a simple retarget and recompile can result in benefits such as quicker app startup time and automatic Windows Store app updates.

This article will look at migrating a sample healthcare app to Windows 8.1.

For an in-depth and general technical discussion of migrating a Windows Store App to Windows 8.1, please refer to the following white paper.

http://msdn.microsoft.com/en-us/library/windows/apps/dn376326.aspx

Retargeting Windows Store Apps to Windows 8.1

Depending on the functionality and complexity of your app, retargeting a Windows Store app to Windows 8.1 is a relatively straightforward process.

Developers can plan their retargeting process to be incremental. Initially, the app can be recompiled to resolve any build errors so the app takes advantage of the Windows 8.1 platform. Subsequently, developers can review any functionalities in the app that will benefit from re-implementation using the newly available APIs. Finally, the retargeting process gives the developer an opportunity to review the compatibility of 3rd party libraries in Windows 8.1.

When the healthcare sample app was migrated to Windows 8.1, the simple recompile option, checking usage of 3rd party libraries, and re-implementing the settings control using the new Windows 8.1 XAML control was performed.

For reference, Microsoft Developer Network has extensive documentation that covers all facets of migrating an app to Windows 8.1. Please refer the following link.

http://msdn.microsoft.com/en-us/library/windows/apps/dn263114.aspx

A Healthcare Windows Store App

As seen in several other articles in this forum, we will use a sample healthcare Line of Business Windows Store app.

Some of the previous articles include:

The application allows the user to login to the system, view the list of patients (Figure 1), access patient medical records, profiles, doctor’s notes, lab test results, and vital graphs.


Figure 1: The “Patients” page of the Healthcare Line of Business app provides a list of all patients. Selecting an individual patient provides access to the patient’s medical records.

Retargeting sample healthcare app

Before the sample app is retargeted, it is useful to review different components, UI features, and 3rd party libraries that are used.

The UI and the core app life cycle handling of the app were implemented using the templates available in Windows 8. Windows 8.1 updated the project and page templates and included a brand new Hub pattern template. The app uses sqlite* as its backend database to store all patient records. WinRTXamlToolkit* is used for charts, and the 3rd party library Callisto* is used for implementing the Settings Control. The Settings Control is invoked from the charms bar. Windows 8.1 has a new XAML-based settings control that can be used instead of a 3rd party library.

The app has search functionality implemented using the charms bar integrated in Windows 8. Windows 8.1 has a new in-app search UI control that could be used to extend the search experience to different UI pages of the app, depending on the requirements.
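
For reference, a minimal sketch of the new Windows 8.1 SearchBox control (the handler and element names here are hypothetical, not taken from the sample app) could look like this:

<!-- XAML: the Windows 8.1 in-app search control -->
<SearchBox x:Name="PatientSearchBox"
           PlaceholderText="Search patients"
           QuerySubmitted="PatientSearchBox_QuerySubmitted"/>

// Code-behind: react to the submitted query text.
private void PatientSearchBox_QuerySubmitted(SearchBox sender, SearchBoxQuerySubmittedEventArgs args)
{
    string query = args.QueryText;
    // Filter the patient list using the query here.
}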

The sample app has several other functionalities like camera usage, NFC and audio recording that will continue to function in Windows 8.1 without any changes.

As mentioned earlier, Visual Studio 2013 was used to recompile the sample app for Windows 8.1, 3rd party library build issues were fixed, and parts of the app were re-implemented using new Windows 8.1 features. The app can be refined further in the future by re-implementing more pieces of the app that benefit from the Windows 8.1 platform. For example, new page templates, view models for different screen sizes, the new in-app search, or the new tile sizes and templates could be utilized.

Using Visual Studio 2013 to import the app

To retarget the app for Windows 8.1, first download and install Visual Studio 2013 on a Windows 8.1 host machine. After the installation, ensure any 3rd party libraries are updated to the latest version using the Visual Studio extensions dialog.

The project was opened in Visual Studio 2013, and no errors were seen when compiling the project out of the box. To retarget the project to Windows 8.1, right-click the project name in the solution explorer; the option for retargeting the project to Windows 8.1 is shown in the list (Figure 2).


Figure 2: Option for retargeting the project (captured from Visual Studio* 2013)

Clicking on this option brings up a dialog box asking for confirmation. Verify that the project selected is correct and press the OK button.


Figure 3: Confirmation Dialog for retargeting (captured from Visual Studio 2013*)

After Visual Studio completes the action, you should see the project now has “Windows 8.1” next to the project name in the solution explorer (Figure 4).


Figure 4: Solution Explorer shows the project is retargeted to Windows 8.1 (captured from Visual Studio 2013*)

When trying to compile the project, build errors may occur. Figure 4 also shows some 3rd party libraries with build issues. The next section discusses how these build errors and 3rd party library issues were resolved.

Fixing build errors and 3rd party library issues

Updating the 3rd party libraries to the latest version resolves some of the problems. The Visual Studio* extensions dialog can be used to check for the latest library version available. Sqlite* was updated to the Windows 8.1 version, as shown in Figure 5.


Figure 5: Extensions dialog (captured from Visual Studio 2013*)

The usage of some of the 3rd party libraries in the app was re-evaluated after migrating to Windows 8.1. As mentioned earlier, the Settings UI control has been added to Windows 8.1. It was decided that it would be best to remove the 3rd party library Callisto* from the app and utilize the native Windows 8.1 control. To migrate to the native control, all source code references to the Callisto* library were removed. See Figure 6 for the updated project references in the solution explorer.


Figure 6: Project references in sample app after retargeting to Windows 8.1* (captured from Visual Studio 2013*)

WinRTXamlToolkit* is still being utilized for charts and other features, so it has been updated to the Windows 8.1 version.

By using the newly available Windows 8.1 XAML settings control, the app maintains the same look and feel as when using the 3rd party library. Figure 7 shows the settings control in design mode.


Figure 7: Settings Flyout UI in design mode (captured from Visual Studio 2013*)

Using the new XAML based settings control, SettingsFlyout, is similar to other XAML controls. The following snippet shows the XAML code used for the sample app’s settings UI.

<SettingsFlyout
    x:Class="PRApp.Views.PRSettingsFlyout"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:PRApp.Views"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    IconSource="Assets/SmallLogo.png"
    Title="PRApp Options"
    HeaderBackground="{StaticResource StandardBackgroundBrush}"
    d:DesignWidth="346"
    xmlns:c="using:PRApp.ViewModels">

    <SettingsFlyout.Resources>
        <c:SessionSettingsViewModel x:Key="myDataSource"/>
    </SettingsFlyout.Resources>
    <!-- This StackPanel acts as a root panel for vertical layout of the content sections -->
    <StackPanel VerticalAlignment="Stretch" HorizontalAlignment="Stretch">


        <StackPanel Orientation="Horizontal"  Margin="5">
            <TextBlock Text="{Binding Source={StaticResource myDataSource}, Path=SessionSettings.Loginuser.Login}" Margin="0,0,5,0" />
            <TextBlock Text="{Binding Source={StaticResource myDataSource}, Path=SessionSettings.Loginuser.Loginmsg}" />
        </StackPanel>
        <Button Content="User Home" Margin="5" Command="{Binding Source={StaticResource prAppUtil}, Path=UserHomeCmd}"/>
        <Button Content="Logout" Margin="5" Click="Button_Click_1" />
        <ToggleSwitch Header="Show Deceased Patients" Margin="5" IsOn="{Binding Mode=TwoWay, Source={StaticResource myDataSource}, Path=SessionSettings.ShowDeceased}"/>
        <StackPanel Orientation="Horizontal"  Margin="5">
            <ToggleSwitch Header="Use Cloud Service" Margin="5,0,0,0" IsOn="{Binding SessionSettings.UseCloudService, Mode=TwoWay, Source={StaticResource myDataSource}}"/>
        </StackPanel>
        <StackPanel Orientation="Vertical"  Margin="5">
            <TextBlock Margin="5" FontSize="14" Text="Server Address:" Width="97" HorizontalAlignment="Left" VerticalAlignment="Center" />
            <TextBox HorizontalAlignment="Stretch" FontSize="12" Margin="5" Text="{Binding SessionSettings.ServerUrl, Mode=TwoWay, Source={StaticResource myDataSource}}"  />
        </StackPanel>


        <Button HorizontalAlignment="Right"  Click="Button_Click_Test_Connection" Content="Test Connection"/>
        <TextBlock  TextWrapping="Wrap"  x:Name="StatusText" HorizontalAlignment="Left" Text="{Binding SessionSettings.TestConnectionStatus, Source={StaticResource myDataSource}}"  />

    </StackPanel>
</SettingsFlyout>

Figure 8: XAML code snippet for settings flyout in sample app

Configuring and initializing the SettingsFlyout is done in the app’s main point of entry (App.xaml.cs file). SettingsFlyout is added to the ApplicationCommands collection. Please see the following code snippet in Figure 9 for reference.

protected override void OnWindowCreated(WindowCreatedEventArgs args)
{
    Windows.UI.ApplicationSettings.SettingsPane.GetForCurrentView().CommandsRequested += Settings_CommandsRequested;
}

void Settings_CommandsRequested(Windows.UI.ApplicationSettings.SettingsPane sender, Windows.UI.ApplicationSettings.SettingsPaneCommandsRequestedEventArgs args)
{
    Windows.UI.ApplicationSettings.SettingsCommand PRSettingsCmd =
        new Windows.UI.ApplicationSettings.SettingsCommand("PRAppOptions", "PRApp Options", (handler) =>
        {
            PRSettingsFlyout PRSettingsFlyout = new PRSettingsFlyout();
            PRSettingsFlyout.Show();

        });

    args.Request.ApplicationCommands.Add(PRSettingsCmd);
}

Figure 9: Code snippet showing the settings flyout initialization

SettingsFlyout is an excellent Windows 8.1 feature: it is easy to use, and it follows the design best practices recommended for Windows Store apps. In addition, the effort to transition to this native control was painless.
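If the same flyout also needs to be opened from in-app UI (for example, an app bar button) instead of only from the Settings charm, it can be shown directly. Below is a minimal sketch; the button handler name is hypothetical, but Show() and ShowIndependent() are both part of the Windows 8.1 SettingsFlyout control.

private void SettingsButton_Click(object sender, RoutedEventArgs e)
{
    // ShowIndependent() opens the flyout on its own, without placing the
    // system Settings pane behind it; use Show() when launching from the charm.
    PRSettingsFlyout flyout = new PRSettingsFlyout();
    flyout.ShowIndependent();
}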

Summary

This article discussed retargeting a sample health care Windows Store App from Windows 8 to Windows 8.1. The steps involved in the retargeting process were covered in detail with relevant screenshots and code snippets. The article concluded with a discussion about replacing a 3rd party library with a native Windows 8.1 control.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2013 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

++This sample source code is released under the Intel OBL Sample Source Code License (MS-LPL Compatible), Microsoft Limited Public License, and Visual Studio* 2013 License.

Implementing multi-user multi-touch scenarios using WPF in Windows* 8 Desktop Apps


Downloads

Implementing multi-user multi-touch scenarios using WPF in Windows* 8 Desktop Apps [PDF 602KB]
Multiuser-Multitouch-Codesample.zip [ZIP 206KB]

Summary

In this paper we walk through a sample application (in this case a game that quizzes people on the Periodic Table) that enables multi-user, multi-touch capability and is optimized for large touchscreen displays. By using User Controls and touch events, we can enable a scenario where multiple users can play the game at the same time.

Windows Presentation Foundation (WPF) provides a deep touch framework that allows us to handle low-level touch events and support a multitude of scenarios from simple touch scrolling to a multi-user scenario. This game has two areas where users can touch, scroll, and click using their fingers simultaneously while the remainder of the UI remains responsive. Finally, this application was designed and built using XAML and C# and follows the principles of the Model-View-ViewModel software development pattern.

Supporting Large Touch Displays and multiple users in Windows Presentation Foundation

WPF is an excellent framework for building line-of-business applications for Windows desktop systems, but it can also be used to develop modern and dynamic applications. You can apply many of the same principles you use when designing WPF applications, with a few small tweaks to make them friendly and easy to use on a large-format display.

The XAML markup language has, as a foundational principle, lookless controls. This means that the appearance and styling of a control is separate from the control’s implementation. The control author may provide a default style for the control, but this can easily be overridden. If you place a style in your XAML (implicit or explicit), it is applied on top of the default style that ships with the framework. You can also use the template extraction features in Visual Studio* 2012 to make a copy of the styles and templates that ship with the .NET framework and use it as a starting point.

Let’s look at an example:

To create a window with a custom close button, I created an empty WPF project in Visual Studio and edited the MainWindow.xaml file as follows:

<Window x:Class="ExampleApplication.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525" WindowStyle="None">
    <Grid>
        <Button HorizontalAlignment="Right" VerticalAlignment="Top" Content="Close" Click="Close_Window" />
    </Grid>
</Window>

I then wrote a C# method to handle closing the window:

        private void Close_Window(object sender, RoutedEventArgs e)
        {
            this.Close();
        }

This created a Window like the one below:

Since we are on the Windows 8 platform, we can use the Segoe UI Symbol font to put the close symbol in the button. You can browse for the symbol you want to use in the Windows Character Map under the Segoe UI Symbol font:

Now that I have the character code, I can begin customizing the button. To start, I added the close symbol to the button:

<Button HorizontalAlignment="Right" VerticalAlignment="Top" FontFamily="Segoe UI Symbol" Content="" Click="Close_Window" />

I also want to style the button to make it touch-friendly by applying a XAML style. This can be done by creating an implicit style anywhere above the button in its visual tree. I will add the Button style to the Window’s resources so that it’s available to any button within the Window:

<Style TargetType="Button">
    <Setter Property="BorderBrush" Value="White" />
    <Setter Property="Background" Value="Transparent" />
    <Setter Property="Foreground" Value="White" />
    <Setter Property="BorderThickness" Value="2" />
    <Setter Property="Padding" Value="12,8" />
    <Setter Property="FontSize" Value="24" />
    <Setter Property="FontWeight" Value="Bold" />
</Style>

To illustrate this effect, I changed the Window’s background color to white. The above style will result in a button that appears like this:

You can always change the style to have a larger icon and less padding, for example. With buttons and text content, you may find yourself using static padding, margin, and size values, since they rarely change. If you want text content to be truly responsive, you can put it in a Viewbox so that it scales in size relative to the Window. This isn’t necessary for most large-screen applications, but it is something to consider if your application will run at very extreme resolutions.
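As a rough illustration of the Viewbox approach (the text and sizes here are placeholders, not from the sample app), wrapping a TextBlock scales it with the available space:

<Viewbox Stretch="Uniform" MaxHeight="120">
    <!-- The TextBlock is measured once and then scaled with the Viewbox,
         so the text grows or shrinks with the Window instead of staying fixed. -->
    <TextBlock Text="Periodic Table" FontFamily="Segoe UI" />
</Viewbox>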

For most UI elements, you will want to base your padding and margins on relative sizes. This can be accomplished by using a Grid as your layout system. For example, in the demo application we wanted a very thin band of space around each periodic table element. I could use a 1px padding around each item, but the apparent width of that padding would differ between large and small displays. You also have to consider that your end users might be running much larger monitors and higher resolutions than your development environment supports. To resolve this, I use grid rows and columns to represent the padding. For example, I can create a grid with 3 rows and 3 columns like the one below:

<Grid x:Name="tableRoot">
    <Grid.RowDefinitions>
        <RowDefinition Height="0.01*"/>
        <RowDefinition Height="0.98*"/>
        <RowDefinition Height="0.01*"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="0.01*"/>
        <ColumnDefinition Width="0.98*"/>
        <ColumnDefinition Width="0.01*"/>
    </Grid.ColumnDefinitions>
</Grid>

In grid definition sizing you have three options: static sizing using an absolute height or width, auto sizing that measures the content to determine size, and relative (star) sizing; you can also mix and match them. In our example, we make heavy use of relative sizing. The XAML engine sums the relative values and gives each row or column a share equal to the ratio of its value to that total. For example, if you have columns sized like below:

<Grid.ColumnDefinitions>
    <ColumnDefinition Width="4*"/>
    <ColumnDefinition Width="7*"/>
    <ColumnDefinition Width="9*"/>
</Grid.ColumnDefinitions>

The sum of the column widths (4, 7, and 9) is 20, so each column’s width is the ratio of its value to that total: the first column gets 4/20 (20%), the second 7/20 (35%), and the third 9/20 (45%). While this works fine, it’s considered good practice to have your columns (or rows) sum to either 100 or 1 for simplicity’s sake. In the first example, the heights and widths add up to 1. The column and row indexes are zero-based, so we put the content in column 1 and row 1 and it gets a 1% padding all around. This padding is 1% regardless of resolution and appears relatively the same to all users, whereas a padding set to a static size will be much thinner than you expect on a large, high-resolution touchscreen. In the periodic table application, you can see this 1% padding when browsing the table itself:
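For completeness, here is roughly how content sits in the center cell of the 3x3 grid shown earlier; the Border and TextBlock are illustrative, not the sample app’s exact markup:

<!-- Row 1, column 1 is the content cell; the surrounding rows and columns form the 1% gutter. -->
<Border Grid.Row="1" Grid.Column="1" Background="DarkSlateGray">
    <TextBlock Text="H" HorizontalAlignment="Center" VerticalAlignment="Center" />
</Border>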

You can also enable touch scrolling to make your application feel more responsive. Out of the box, WPF lets you use your finger to scroll through a list element, but the ScrollViewer locks the scrolling to each element, so it feels more like flicking between items. If you want to enable smooth scrolling, set the PanningMode of the ScrollViewer. By default, PanningMode is set to None; setting it to VerticalOnly or HorizontalOnly enables smooth scrolling through items in a list view. In the Periodic table application, the ScrollViewer.PanningMode attached property is used to enable this scenario on a typical ListView. I also set the ScrollViewer.CanContentScroll property to false so that the items do not snap and the user can pan smoothly between them.

<ListView x:Name="SecondBox" Background="Transparent" ItemsSource="{Binding Source={StaticResource PeriodicData}}" 
                  ScrollViewer.VerticalScrollBarVisibility="Disabled" 
                  ScrollViewer.HorizontalScrollBarVisibility="Visible"
                  ScrollViewer.PanningMode="HorizontalOnly" 
                  ScrollViewer.CanContentScroll="False"></ListView>

The ListView mentioned is used in the application for viewing Periodic table items like below:

Finally, WPF lets us use the built-in touch support that has been around since Windows 7. Windows promotes touch input to mouse input when you don’t specifically handle touch events such as TouchDown, ManipulationDelta, and ManipulationCompleted. This lets you handle a user tapping any of the above items with the Click event handler, which also minimizes the amount of code needed to support both touch and mouse.

Since touch support is implemented at a very low level, the WPF platform does not group touches by user or by cluster. To get around this, control authors typically use visual cues (such as a border or a box) to indicate to users that they should touch within a specific area. To support multiple users, we can put the touch-enabled controls within a UserControl. The browsable Periodic table used to find elements in this game is a UserControl, so we can put as many or as few instances on screen as we want, as sketched below.
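A minimal sketch of a two-player layout might look like the following; the element names are illustrative, and local is assumed to map to the namespace that contains PeriodicTable:

<Grid>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="0.5*" />
        <ColumnDefinition Width="0.5*" />
    </Grid.ColumnDefinitions>

    <!-- Each player works with an independent instance of the control,
         so simultaneous touches on one table do not interfere with the other. -->
    <local:PeriodicTable x:Name="playerOneTable" Grid.Column="0" />
    <local:PeriodicTable x:Name="playerTwoTable" Grid.Column="1" />
</Grid>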

The Model-View-ViewModel Pattern

When building the application, it would be easy to write all of the code in the xaml.cs file and call it a day, but we want to maximize code reuse and build an application that is truly modular. We can accomplish this by leveraging the MVVM design pattern. In the Periodic Table application, every screen is bound to a ViewModel, which holds information for data binding and controls the behavior of the different Views. We also have a XAML-based data source that we need to manipulate to run the game; the data source is discussed in greater detail later in this article.

Since MVVM is a popular design pattern, it is possible to use it in the WPF, Windows Store, and Windows Phone platforms. To support this scenario, we can put our Models and ViewModels into Portable Class Libraries (PCLs) that can be referenced by all of those platforms. The PCLs contain the common functionality and namespaces between all of those platforms and allow you to write cross-platform code. Many tools and libraries (such as Ninject, PRISM’s EventAggregator, and others) are available via NuGet and can be referenced in a PCL so you can create large-scale applications. If you need to support a new platform, you simply create new Views and reference the existing ViewModels and Models.

This application is parsing a static data file that contains information about how to render the Periodic table. The Models are aware of the classes in WPF so PCLs would not be appropriate in this example.

In this application, we use the PRISM framework to leverage the already well-built modules for MVVM development.

For the home page, we have a BaseViewModel that has one command. The ExitCommand closes the application when executed. We can bind this command to the button mentioned earlier in the article by applying a data binding to the Button’s Command dependency property.

    public class BaseViewModel : NotificationObject
    {
        public BaseViewModel()
        {
            this.ExitCommand = new DelegateCommand(ExitExecute);
        }

        public DelegateCommand ExitCommand { get; private set; }

        private void ExitExecute()
        {
            Application.Current.Shutdown();
        }
    }

First, the ViewModel inherits from PRISM’s NotificationObject class. This class contains all of the logic to let the View know when a ViewModel’s property is updated. This is accomplished by implementing the INotifyPropertyChanged interface. If you ever want to look at a very solid best-practices implementation of INotifyPropertyChanged, view the source code for the PRISM project to see how the team at Microsoft implemented the interface.

Next, we use the DelegateCommand class from the PRISM framework. DelegateCommand is an implementation of the ICommand interface, which is the heart of commanding in WPF. This class can handle a button’s click event and the logic that determines whether the button is enabled. ICommand is not limited to buttons, but buttons are its primary use case.

In our BaseViewModel class, we create a new instance of the DelegateCommand class and pass in the ExitExecute action to be executed when the Command is invoked (by pressing the button).
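Binding that command to the close button from earlier in the article is then a single attribute in XAML. A minimal sketch, assuming the Window’s DataContext is set to a BaseViewModel instance:

<!-- The Click handler is no longer needed; the ICommand handles the invocation. -->
<Button HorizontalAlignment="Right" VerticalAlignment="Top"
        Content="Close"
        Command="{Binding ExitCommand}" />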

Because you can close the application from any screen, all of the other pages inherit from the BaseViewModel class. To keep all of the game-related logic together, both the 1-player and 2-player games use ViewModels that inherit from a GameViewModel class which in-turn inherits from BaseViewModel.

The GameViewModel class implements publicly accessible properties that are used in a game. Below are a couple of example fields that are shown on a game screen:

For example, we have a RoundTimeLeft property that shows how much time is left in a round. The property is of type TimeSpan and uses a private backing field. When the property is set, a method from the NotificationObject class notifies the View layer that a ViewModel property has been updated.

        private TimeSpan _roundTimeLeft;
        public TimeSpan RoundTimeLeft
        {
            get { return _roundTimeLeft; }
            private set
            {
                _roundTimeLeft = value;
                RaisePropertyChanged(() => RoundTimeLeft);
            }
        }

This is especially useful in situations where you want the View to refresh multiple properties when you update a single field/property. Also, as a performance improvement for advanced applications, it is very common to check whether the value has actually changed before raising the notification (a sketch of that pattern follows the next snippet). Below is an example of the HintItem and Hint properties used in the ViewModel. The Hint property is the symbol shown in the center, and we want to update that text any time we store a new HintItem in the ViewModel. This is done by letting the View know that the Hint property has been updated:

        private PeriodicItem _hintItem;
        public string Hint
        {
            get
            {
                return this.HintItem != null ? this.HintItem.Abbreviation : string.Empty;
            }
        }

        public PeriodicItem HintItem
        {
            get { return _hintItem; }
            private set
            {
                _hintItem = value;
                RaisePropertyChanged(() => Hint);
                RaisePropertyChanged(() => HintItem);
            }
        }
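As mentioned above, a setter can skip the notification when the incoming value has not actually changed. A minimal sketch of that pattern, using a hypothetical Score property:

        private int _score;
        public int Score
        {
            get { return _score; }
            private set
            {
                // Only notify the View when the value really changes.
                if (_score == value) return;
                _score = value;
                RaisePropertyChanged(() => Score);
            }
        }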

The Model-View-ViewModel pattern is very powerful and enables testability and expanded code reuse. The pattern applies whether you are working on a line-of-business application or a touch application. The GameViewModel class uses a timer and a loop to handle the execution of the game (see the sketch below). Both OnePlayerViewModel and TwoPlayersViewModel inherit from GameViewModel and add logic specific to each type of game. The application also has a DesignGameViewModel with a set of static properties so that we can see how the game will look at design time without having to run the application:
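The sketch below gives a rough idea of how GameViewModel might drive the RoundTimeLeft countdown with a DispatcherTimer; the interval, round length, and method name are assumptions for illustration, not the sample’s actual implementation:

        // System.Windows.Threading.DispatcherTimer raises Tick on the UI thread.
        private readonly DispatcherTimer _roundTimer;

        protected GameViewModel()
        {
            // Tick once per second and count the round down to zero.
            _roundTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
            _roundTimer.Tick += (s, e) =>
            {
                RoundTimeLeft = RoundTimeLeft - TimeSpan.FromSeconds(1);
                if (RoundTimeLeft <= TimeSpan.Zero)
                {
                    _roundTimer.Stop();
                    // Round-over logic would run here.
                }
            };
        }

        private void StartRound()
        {
            RoundTimeLeft = TimeSpan.FromSeconds(60);   // assumed round length
            _roundTimer.Start();
        }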

Tips & Tricks for building immersive applications in WPF

There are several XAML tricks used throughout this application to make it visually appealing and touch friendly. Some are very common, but a few are worth highlighting because they use some of the best features of WPF and XAML.

First, the PeriodicTable itself is a WPF UserControl. This allows maximum code re-use as the control can simply be placed on any WPF Window. Within the control, Dependency Properties are used so that you can set features of the control and expose those features externally for data-binding. For example, the PeriodicTable has two states. ZoomedOut is when you see the entire table:

ZoomedIn is when you see the detailed list. When clicking on a Periodic Group from the ZoomedOut view, the game jumps to that group on the ZoomedIn list. There is also a button in the bottom-right corner to zoom back out:

To implement this, there are two list views, one for each of the “Views.” A dependency property is created to expose the zoom state so that anybody can set it, and a property-changed callback is registered so that the control can respond to changes from both code and data bindings in one location:

        public static readonly DependencyProperty IsZoomedInProperty = DependencyProperty.Register(
            "IsZoomedIn", typeof(bool), typeof(PeriodicTable),
            new PropertyMetadata(false, ZoomedInChanged)
        );

        public bool IsZoomedIn
        {
            get { return (bool)GetValue(IsZoomedInProperty); }
            set { SetValue(IsZoomedInProperty, value); }
        }

        public void SetZoom(bool isZoomedIn)
        {
            // Toggle which of the two ListView containers is visible.
            if (isZoomedIn)
            {
                FirstContainer.Visibility = Visibility.Collapsed;
                SecondContainer.Visibility = Visibility.Visible;
            }
            else
            {
                FirstContainer.Visibility = Visibility.Visible;
                SecondContainer.Visibility = Visibility.Collapsed;
            }
        }
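The ZoomedInChanged callback registered in the PropertyMetadata above is what funnels every change, whether from code-behind or a data binding, into SetZoom. A minimal sketch of what that callback might look like:

        private static void ZoomedInChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            // The dependency property system invokes this for each value change,
            // so both code and binding updates end up toggling the two containers.
            var table = (PeriodicTable)d;
            table.SetZoom((bool)e.NewValue);
        }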

This dependency property is used in the TwoPlayerView so that we can bind the Second Player’s zoomed in state to a Boolean in the ViewModel called PlayerTwoZoomedIn:

<local:PeriodicTable x:Name="playerTwoTable" IsZoomedIn="{Binding PlayerTwoZoomedIn, Mode=TwoWay}"></local:PeriodicTable>

This implementation gives us the flexibility to tie custom features of the control to anything in the ViewModel. In our application, we need to set PlayerTwoZoomedIn (and PlayerOneZoomedIn) to false when a round or the game is reset.

XAML is also heavily used to store the data in this application. While a database or a text file could be created, it seemed to be much more readable to store the Periodic table’s data as XAML. Since XAML is just an XML representation of CLR objects, we could create model classes and corresponding XAML elements. We can then store this in a XAML resource dictionary and load it as data at runtime (or design time if you wish).

For example, we have a class for PeriodicItems that has a very simple definition and is represented by even simpler XAML:

    public class PeriodicItem
    {
        public string Title { get; set; }

        public string Abbreviation { get; set; }

        public int Number { get; set; }
    }

<local:PeriodicItem Abbreviation="Sc" Title="Scandium" Number="21" />
<local:PeriodicItem Abbreviation="Ti" Title="Titanium" Number="22" />

This made defining the Periodic table easy and readable. You can find all of the Periodic elements used in the application in the PeriodicTableDataSource.xaml file located in the Data folder. Here is an example of a Periodic Group defined in that file.

<local:PeriodicGroup Key="Outer Transition Elements">
    <local:PeriodicGroup.Items>
        <local:PeriodicItem Abbreviation="Ni" Title="Nickel" Number="28" />
        <local:PeriodicItem Abbreviation="Cu" Title="Copper" Number="29" />
        <local:PeriodicItem Abbreviation="Zn" Title="Zinc" Number="30" />
        <local:PeriodicItem Abbreviation="Y" Title="Yttrium" Number="39" />
    </local:PeriodicGroup.Items>
</local:PeriodicGroup>

Because of this, the Periodic data is dynamic and can be modified by simply updating the .xaml file. You can also use the same data in both design and runtime Views since it’s compiled and available as a resource for XAML.
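If the data ever needs to be read in code rather than through the {StaticResource PeriodicData} reference used by the ListView, the same dictionary can be loaded and queried at runtime. A minimal sketch under assumptions: the relative URI and the collection type of the PeriodicData entry are guesses for illustration.

// Load the compiled XAML data file and pull the periodic data out by key.
var data = new ResourceDictionary
{
    Source = new Uri("Data/PeriodicTableDataSource.xaml", UriKind.Relative)
};

// Assumed here to be a collection of PeriodicGroup objects.
var groups = data["PeriodicData"] as IEnumerable<PeriodicGroup>;
if (groups != null)
{
    foreach (PeriodicGroup group in groups)
    {
        Debug.WriteLine(group.Key);
    }
}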

Summary

Building an application that supports a large amount of data and advanced touch scenarios is definitely possible in the Windows 8 desktop environment. XAML is a powerful markup language that allows you to not only define dynamic views, but also model your data in a common format that is very easy to read, understand, and parse. You can build touch applications today using the mature WPF platform.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2013 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

Using Intel® Graphics Performance Analyzers to Optimize Splinter Cell*: Blacklist*


Download article

Using Intel® Graphics Performance Analyzers to Optimize Splinter Cell*: Blacklist* [PDF 559KB]

For the most recent installment of Tom Clancy’s* Splinter Cell series, engineers from Ubisoft and Intel analyzed the game to make it run smoothly and achieve the best performance on Intel® hardware. Using Intel® Graphics Performance Analyzers (Intel® GPA), we found several bottlenecks in the frame, and Ubisoft was then able to optimize the draw calls we identified, tripling the frame rate. The most expensive rendering passes were the lighting environment pass and the shadow pass. On a 4th generation Intel® Core™ i7 processor-based desktop, the game achieved 43 frames per second (fps) at 1366x768 resolution on low settings and 35 fps on medium settings.

Specifying a Workload

Optimizing a game is full of experiments—changing shaders, using different textures, trying new approaches—to find troublesome components or rendering passes and increase the speed. As with any experiment, an analytical approach is vital in determining the quality of the results. An in-game workload (representative scene where performance is lacking) should be chosen for the analysis. Figure 1 shows one such scene: numerous objects, multiple light sources, and several characters.


Figure 1. Scene chosen for analysis.

Identifying and Addressing the Problems

To see what’s going on here, we used the Intel GPA Monitor to remotely capture a frame for analysis. After loading that capture in the Intel GPA Frame Analyzer, we identified the problematic ergs (units of work, from the Greek ἔργον (ergon), meaning “work”) by their charted size, that is, the amount of time the GPU spends on them. By inspecting the ergs individually and mapping them to stages in the game’s pipeline, we isolated two main issues (figure 2): the SeparableSSS pass and the lighting environment pass. The SeparableSSS pass was removed due to its extremely high cost.


Figure 2. Erg view of Intel® GPA frame capture before optimization

This definitely helped overall performance. With these two ergs out of the way, the lighting environment pass was the next area to tackle. It was problematic not solely because of the time spent on each erg, but because of the large number of ergs involved, which combined into a prohibitive amount of total GPU time: ~100 x 1500 μs = 150,000 μs! Figure 3 makes this painfully clear.


Figure 3. Lighting environment pass grows prohibitive

Figure 4 shows the gains after optimizing the lighting environment pass.


Figure 4. Removing the lighting environment pass sped things up significantly

The two ergs tied for second place in height come from the ShadowPass.Composite function. These were also optimized.

Final Results

After these changes, folding the highly effective shadow compositing into the rendering passes lowered processing costs even further (figure 5).


Figure 5. Intel® GPA frame capture after optimization

Overall, the performance approximately tripled, achieving 43 fps at 1366x768 resolution with low settings (35 fps on medium settings) on a 4th generation Intel Core i7 processor-based desktop.

Conclusion

Approaching optimization like a series of experiments is useful in finding the root causes of your performance problems. The issues outlined here represent just a few ways to use Intel GPA to streamline your applications. Many more can be found through the documentation, online forums, and articles on Intel® Developer Zone.

About the Author

Brad Hill is a Software Engineer at Intel in the Developer Relations Division. Brad investigates new technologies on Intel hardware and shares the best methods with software developers via the Intel Developer Zone and at developer conferences. He also runs Code for Good Student Hackathons at colleges and universities around the country.

Intel, the Intel logo, and Core are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.
