
How to Integrate Intel® Perceptual Computing SDK with Cocos2D-x


Downloads

How to Integrate Intel® Perceptual Computing SDK with Cocos2D-x [PDF 482KB]

Introduction

In this article, we will explain the project we worked on as part of the Intel® Perceptual Computing Challenge Brazil, where we managed to achieve 7th place. Our project was Badaboom, a rhythm game set in the Dinosaur Era where the player controls a caveman, named Obo, by hitting bongos at the right time. If you’re curious to see the game in action, check out our video of Badaboom:

To begin, you’ll need to understand a bit about Cocos2D-X, an open-source game engine that is widely used to create games for iPhone* and Android*. The good thing about Cocos2D-X is that it is cross-platform and thus is used to create apps for Windows* Phone, Windows 8, Win32*, Linux*, Mac*, and almost any platform you can think of. For more information, go to www.cocos2dx.org.

We will be using the C++ version of the SDK (Version 9302) as well as the Cocos2D-X v2.2 (specifically the Win32 build with Visual Studio* 2012). Following the default pattern of Cocos2D, we will create a wrapper that receives and processes the data from the Creative* Interactive Gesture Camera and interprets it as “touch” for our game.

Setting the environment

To start, you’ll need to create a simple Cocos2D project. We will not cover this subject as it is not the focus of our article. If you need more information, you can find it on the Cocos2D wiki (www.cocos2dx.org/wiki).

To keep it simple, execute the Python* script to create a new project in the “tools” folder of Cocos2d-x and open the Visual Studio project. Now we will add the Intel Perceptual Computing SDK to the project.

To handle the SDK’s input, we will create a singleton class named CameraManager. This class starts the camera, updates the cycle, and adds two images to the screen that represent the position of the hands on the game windows.

CameraManager is a singleton class derived from UtilPipeline; it includes the "util_pipeline.h" header. Here, we need to reconfigure some of the Visual Studio project properties. Figure 1 shows how to add the additional include directories for the Intel Perceptual Computing SDK.

   $(PCSDK_DIR)/include
   $(PCSDK_DIR)/sample/common/include
   $(PCSDK_DIR)/sample/common/res


Figure 1. Additional include directories

You must also add the following paths to the additional library directories:

   $(PCSDK_DIR)/lib/$(PlatformName)
   $(PCSDK_DIR)/sample/common/lib/$(PlatformName)/$(PlatformToolset)


Figure 2. Additional Library Directories

Add the following additional dependencies in the linker's Input section:

   libpxc_d.lib
   libpxcutils_d.lib


Figure 3. Additional Dependencies

Now we are ready to work on our CameraManager!
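For orientation, here is a minimal sketch of what the CameraManager declaration could look like, assuming the member names used in the snippets that follow (the exact header in your project may differ):

// CameraManager.h -- hedged sketch; members mirror the snippets shown in this article.
#pragma once

#include "cocos2d.h"
#include "util_pipeline.h"

USING_NS_CC;

class InputAreaObject;   // registered touch area (sketched later in this article)

class CameraManager : public UtilPipeline
{
public:
    static CameraManager* getInstance(void);

    bool Start(CCNode* parent);    // (re)attach the hand sprites to a new layer
    void update(float dt);         // called every frame by the active layer
    void registerActionArea(CCPoint objPos, float radius, cocos2d::SEL_CallFuncO methodToCall);

private:
    CameraManager(void);
    void processGestures(PXCGesture* gesture);
    void checkActionArea(CCPoint objPos, float radius, CCObject* sender, SEL_CallFuncO methodToCall);

    static CameraManager* s_Instance;

    CCNode*   parent;
    CCSprite* hand1sprite;
    CCSprite* hand2sprite;
    CCPoint   hand1Pos, hand2Pos;
    bool      hand1Close, hand2Close;
    bool      hasClickedHand1, hasClickedHand2;
    CCArray*  inputAreas;
};

Remember to define the static pointer once in the .cpp file (CameraManager* CameraManager::s_Instance = NULL;).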

Start Coding!

First we need to make the class a singleton. In other words, the class needs to be accessible from anywhere in the code through one shared instance (singleton classes have only one instance). For this, you can use a static accessor method:

CameraManager* CameraManager::getInstance(void)
{
    if (!s_Instance)
    {
        s_Instance = new CameraManager();
    }

    return s_Instance;
}

After that, we'll write the constructor, which starts the camera:

CameraManager::CameraManager(void)
{
	if (!this->IsImageFrame()){
		this->EnableGesture();

		if (!this->Init()){
			CCLOG("Init Failed");
		}
	}

	this->hand1sprite = NULL;
	this->hand2sprite = NULL;

	hasClickedHand1 = false;
	hasClickedHand2 = false;

	this->inputAreas = CCArray::createWithCapacity(50);
	this->inputAreas->retain();
}

Most of these statements initialize the variables that handle the sprites representing the users' hands and the flags used to detect when the hands close. The next step is processing the data that comes from the camera.

void CameraManager::processGestures(PXCGesture *gesture){
	
	PXCGesture::Gesture gestures[2]={0};

	gesture->QueryGestureData(0,PXCGesture::GeoNode::LABEL_BODY_HAND_PRIMARY,0,&gestures[0]);
	gesture->QueryGestureData(0,PXCGesture::GeoNode::LABEL_BODY_HAND_SECONDARY,0,&gestures[1]);

	
	CCEGLView* eglView = CCEGLView::sharedOpenGLView();
	switch (gestures[0].label)
	{
	case (PXCGesture::Gesture::LABEL_POSE_THUMB_DOWN):
		CCDirector::sharedDirector()->end();
		break;
	case (PXCGesture::Gesture::LABEL_NAV_SWIPE_LEFT):
		CCDirector::sharedDirector()->popScene();
		break;
	}
}

This is also the method where you can add switch cases to handle voice commands and implement more gesture handlers. Next, we must process this information and display it on the CCLayer (the Cocos2D sprite layer).

bool CameraManager::Start(CCNode* parent){
	this->parent = parent;

	if (this->hand1sprite!=NULL
    &&  this->hand1sprite->getParent()!=NULL){
		this->hand1sprite->removeFromParentAndCleanup(true);
		this->hand2sprite->removeFromParentAndCleanup(true);
	}

	this->hand1sprite = CCSprite::create("/Images/hand.png");
	this->hand1sprite->setOpacity(150);
    //To make it out of screen
	this->hand1sprite->setPosition(ccp(-1000,-1000));
	this->hand1Pos = ccp(-1000,-1000);
	
	this->hand2sprite = CCSprite::create("/Images/hand.png");
	this->hand2sprite->setFlipX(true);
	this->hand2sprite->setOpacity(150);
	this->hand2sprite->setPosition(ccp(-1000,-1000));
	this->hand2Pos = ccp(-1000,-1000);

	parent->addChild(this->hand1sprite, 1000);
	parent->addChild(this->hand2sprite, 1000);
	
	this->inputAreas->removeAllObjects();
	return true;
}

This method should be called each time a new scene or layer is placed on the screen (usually from the onEnter callback). It will automatically remove the hand sprites from the previous parent and add them to the new CCLayer.

Now that our hand sprites have been added to the CCLayer, we can control their position by calling the following method from the CCLayer's update cycle (which is scheduled by the call "this->scheduleUpdate();"). The update method is as follows:

void CameraManager::update(float dt){

	if (!this->AcquireFrame(true)) return;

	PXCGesture *gesture=this->QueryGesture();
	
	this->processGestures(gesture);

	PXCGesture::GeoNode nodes[2][1]={0};
	
	gesture->QueryNodeData(0,PXCGesture::GeoNode::LABEL_BODY_HAND_PRIMARY,1,nodes[0]);
	gesture->QueryNodeData(0,PXCGesture::GeoNode::LABEL_BODY_HAND_SECONDARY,1,nodes[1]);

	CCSize _screenSize = CCDirector::sharedDirector()->getWinSize();

	
	if (nodes[0][0].openness<20 && !this->hand1Close){
		this->hand1sprite->removeFromParentAndCleanup(true);
		this->hand1sprite = CCSprite::create("/Images/hand_close.png");
		this->hand1sprite->setOpacity(150);
		this->parent->addChild(hand1sprite);
		this->hand1Close = true;
	} else if (nodes[0][0].openness>30 && this->hand1Close) {
		this->hand1sprite->removeFromParentAndCleanup(true);
		this->hand1sprite = CCSprite::create("/Images/hand.png");
		this->hand1sprite->setOpacity(150);
		this->parent->addChild(hand1sprite);
		this->hand1Close = false;
	}
	
	if (nodes[1][0].openness<20 && !this->hand2Close){
		this->hand2sprite->removeFromParentAndCleanup(true);
		this->hand2sprite = CCSprite::create("/Images/hand_close.png");
		this->hand2sprite->setFlipX(true);
		this->hand2sprite->setOpacity(150);
		this->parent->addChild(hand2sprite);
		this->hand2Close = true;
	} else if (nodes[1][0].openness>30 && this->hand2Close) {
		this->hand2sprite->removeFromParentAndCleanup(true);
		this->hand2sprite = CCSprite::create("/Images/hand.png");
		this->hand2sprite->setFlipX(true);
		this->hand2sprite->setOpacity(150);
		this->parent->addChild(hand2sprite);
		this->hand2Close = false;
	}

	this->hand1Pos = ccp(_screenSize.width*1.5-nodes[0][0].positionImage.x*(_screenSize.width*HAND_PRECISION/320) + 100,
						 _screenSize.height*1.5-nodes[0][0].positionImage.y*(_screenSize.height*HAND_PRECISION/240));
	this->hand2Pos = ccp(_screenSize.width*1.5-nodes[1][0].positionImage.x*(_screenSize.width*HAND_PRECISION/320) - 100,
						 _screenSize.height*1.5-nodes[1][0].positionImage.y*(_screenSize.height*HAND_PRECISION/240));

	if (!hand1sprite->getParent() || !hand2sprite->getParent()){
		return;
	}
	this->hand1sprite->setPosition(this->hand1Pos);
	this->hand2sprite->setPosition(this->hand2Pos);

	
    CCObject* it = NULL;
	CCARRAY_FOREACH(this->inputAreas, it)
	{
		InputAreaObject* area = dynamic_cast<InputAreaObject*>(it);
		this->checkActionArea(area->objPos, area->radius, area->sender, area->method);
	}
			
	this->ReleaseFrame();

}

This code not only handles the position of the sprites, it also sets a different sprite (hand_close.png) when the camera detects that a hand is less than 20% open. In addition, there is simple logic built around a HAND_PRECISION factor that amplifies hand movement, making the input more sensitive and making it easier to reach the edges of the screen. We do this because the Perceptual Camera is not very precise near the edges, and the sprite positions commonly become erratic as the hand approaches them.
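HAND_PRECISION itself is just a constant defined in the project; it does not come from the SDK. As a hedged sketch, assuming a value of 1.5, the mapping from camera coordinates to screen coordinates used above could be factored like this:

// Hedged sketch: HAND_PRECISION is a project-defined constant, not part of the SDK.
// Any value above 1.0 amplifies hand movement so the sprite can actually reach the screen edges.
#define HAND_PRECISION 1.5f

// Maps the camera's mirrored 320x240 image coordinates to screen coordinates.
// offsetX is the +/-100 shift used above to keep the two hand sprites apart.
static cocos2d::CCPoint cameraToScreen(float imgX, float imgY,
                                       const cocos2d::CCSize& screen, float offsetX)
{
    return ccp(screen.width  * 1.5f - imgX * (screen.width  * HAND_PRECISION / 320) + offsetX,
               screen.height * 1.5f - imgY * (screen.height * HAND_PRECISION / 240));
}

With such a helper, hand1Pos would become cameraToScreen(nodes[0][0].positionImage.x, nodes[0][0].positionImage.y, _screenSize, 100), and hand2Pos the same call with an offset of -100.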

Next, we must add a way to handle the input itself (a closed hand is treated as a touch). We write a method called "checkActionArea" (called in the update method) and register the action areas.

void CameraManager::checkActionArea(CCPoint objPos, float radius, CCObject* sender, SEL_CallFuncO methodToCall){

	if (sender==NULL)
		sender = this->parent;

	float distanceTargetToHand = ccpDistance(this->hand1Pos, objPos);
	if (distanceTargetToHand<radius){
		if (this->hand1Close && !hasClickedHand1){

			this->parent->runAction(CCCallFuncO::create(this->parent, methodToCall, sender));
			hasClickedHand1 = true;
		}
	}
	
	if (!this->hand1Close){
		hasClickedHand1 = false;
	} //TODO: repeat for hand2
}
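The listing above leaves a TODO for the second hand. A symmetric block such as the following sketch, placed right after the hand1 checks and using the hasClickedHand2 flag initialized in the constructor, would complete it:

	// Hedged sketch: same logic as above, applied to the second hand (fills the TODO).
	float distanceTargetToHand2 = ccpDistance(this->hand2Pos, objPos);
	if (distanceTargetToHand2 < radius){
		if (this->hand2Close && !hasClickedHand2){
			this->parent->runAction(CCCallFuncO::create(this->parent, methodToCall, sender));
			hasClickedHand2 = true;
		}
	}

	if (!this->hand2Close){
		hasClickedHand2 = false;
	}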

The registerActionArea() method handles the registration of these areas:

void CameraManager::registerActionArea(CCPoint objPos, float radius, cocos2d::SEL_CallFuncO methodToCall){

	InputAreaObject* newInputArea = new InputAreaObject(objPos, radius, methodToCall);
	this->inputAreas->addObject(newInputArea);
}
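The InputAreaObject class itself is not shown above; a minimal sketch consistent with how checkActionArea() and registerActionArea() use it (the member names objPos, radius, sender, and method are taken from those snippets) could be:

// Hedged sketch of InputAreaObject; only what checkActionArea()/registerActionArea() need.
class InputAreaObject : public cocos2d::CCObject
{
public:
    InputAreaObject(cocos2d::CCPoint objPos, float radius,
                    cocos2d::SEL_CallFuncO method, cocos2d::CCObject* sender = NULL)
        : objPos(objPos), radius(radius), method(method), sender(sender) {}

    cocos2d::CCPoint       objPos;   // center of the touch-sensitive area
    float                  radius;   // activation radius in screen points
    cocos2d::SEL_CallFuncO method;   // callback invoked when a closed hand enters the area
    cocos2d::CCObject*     sender;   // optional payload; checkActionArea() falls back to the parent layer
};

Because the CCArray retains the object when it is added, calling newInputArea->release() after addObject() (or creating the object with autorelease) would avoid leaking the reference created by new.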

Now it is easy to add the Intel Perceptual Computing SDK to your Cocos2D game. Just run:

CameraManager::getInstance()->Start(this);

When entering the Layer, register the objects and methods to be called:

CameraManager::getInstance()->registerActionArea(btn_exit->getPosition(), 150, callfuncO_selector(LevelSelectionScene::backClicked));
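Putting it together, the wiring inside a layer such as the LevelSelectionScene above could look like the following sketch (btn_exit is assumed to be a member sprite of the scene; the callback must match the SEL_CallFuncO signature):

// Hedged sketch of a layer wired to the CameraManager.
void LevelSelectionScene::onEnter()
{
    CCLayer::onEnter();

    // Re-parent the hand sprites to this layer and register a touch area over the exit button.
    CameraManager::getInstance()->Start(this);
    CameraManager::getInstance()->registerActionArea(btn_exit->getPosition(), 150,
                                                     callfuncO_selector(LevelSelectionScene::backClicked));

    this->scheduleUpdate();   // schedules LevelSelectionScene::update(float) every frame
}

void LevelSelectionScene::update(float dt)
{
    CameraManager::getInstance()->update(dt);   // poll the camera and drive the hand sprites
}

void LevelSelectionScene::backClicked(CCObject* sender)
{
    CCDirector::sharedDirector()->popScene();   // example reaction: leave the level selection screen
}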

About us!

We hope you have liked our short tutorial. Feel free to contact us with any issues or questions!

Naked Monkey Games is an indie game studio located in São Paulo, Brazil, and currently part of the Cietec Incubator. It partners with Intel on new and exciting technology projects!

Please follow us on Facebook (www.nakedmonkey.mobi) and Twitter (www.twitter.com/nakedmonkeyG).

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.


Implementing Multiple Touch Gestures Using Unity* 3D with TouchScript


By Lynn Thompson

Downloads

Implementing Multiple Touch Gestures Using Unity* 3D with TouchScript [PDF 1.48MB]

This article provides an overview and example for the several TouchScript gestures (Press, Release, Long Press, Tap, Flick, Pan, Rotate, and Scale) available when developing touch-based Unity* 3D simulations and applications running on Ultrabook™ devices with the Windows* 8 operating system. TouchScript is available at no cost from the Unity 3D Asset Store.

The example used in this article starts with a preconfigured scene imported from Autodesk 3ds Max*. I then add geometry to the Unity 3D scene to construct graphical user interface (GUI) widgets that accept touch input from the user. The multiple gestures available via TouchScript will be implemented and customized so that adjustments to the widget can be made at runtime, allowing for a GUI widget that provides a touch UI acceptable to a wider audience when running a Unity 3D-based application on Windows 8.

Creating the Example

I first import into Unity 3D an Autodesk 3ds Max FBX* export that contains a few geometry primitives and a small patch of banyan and palm trees (see Figure 1). I add a first-person controller to the scene; then, I assign a box collider to the box primitive imported from Autodesk 3ds Max, which acts as the scene’s floor, to prevent the first-person controller from falling out of the scene.


Figure 1. Unity* 3D editor with a scene imported from Autodesk 3ds Max*

Next, I add eight spheres (LeftLittleTouch, LeftRingTouch, LeftMiddleTouch, LeftIndexTouch, RightLittleTouch, RightRingTouch, RightMiddleTouch, and RightIndexTouch) as children of the main camera, which is a child of the first-person controller. I give these spheres a transform scale of x = 0.15, y = 0.30, z = 0.15 and position them in front of the main camera in a manner similar to fingertips on a flat surface. I add a point light above the modified spheres and make it a child of the main camera to ensure illumination of the spheres. The layout of these modified spheres is shown in Figure 2.


Figure 2. Unity* 3D runtime with a first-person controller and modified spheres as children for the touch interface

This ends the base configuration of the example. From here, I add TouchScript gestures to the modified spheres and configure scripts to generate a desired touch response.

Adding Press and Release Gestures

The first-person controller from the initialization step of the example contains the JavaScript* file FPSInput Controller.js and the C# script Mouse Look.cs. The FPSInput Controller.js script takes input from the keyboard; Mouse Look.cs, obviously, takes input from the mouse. I modified these scripts to contain public variables that replace vertical and horizontal inputs into FPSInput Controller.js and to replace mouseX and mouseY inputs into the Mouse Look.cs script.

This replacement is fairly straightforward in FPSInputController.js because the keyboard sending a 1, −1, or 0 to the script is replaced with a touch event that results in public variables being changed to a 1, −1, or 0. The touch objects, their respective scripts, and the values they send to script FPSInputController are provided in Table 1 and can be viewed in their entirety in the Unity 3D FirstPerson project accompanying this article.

Table 1. Touch Objects and Corresponding Scripts in FPSInputController.js

Object or Asset | Script | Public Variable Manipulation
LeftLittleTouch | MoveLeft.cs | horizontal = −1 onPress, 0 onRelease
LeftRingTouch | MoveForward.cs | vertical = 1 onPress, 0 onRelease
LeftMiddleTouch | MoveRight.cs | horizontal = 1 onPress, 0 onRelease
LeftIndexTouch | MoveReverse.cs | vertical = −1 onPress, 0 onRelease

This method works for controller position because the information is discrete, as are the TouchScript onPress and onRelease functions. For rotation, an angle variable needs to be updated every frame. To accomplish this, I send a Boolean value to a Mouse Look.cs public variable, and the rotation angle is changed in the Mouse Look.cs Update function at a rate of 1 degree per frame accordingly. The touch objects, their respective scripts, and the values they send to the Mouse Look.cs script are provided in Table 2 and can be viewed in their entirety in the Unity 3D FirstPerson project accompanying this article.

Table 2. Touch Objects and Corresponding Scripts in Mouse Look.cs

Object or Asset | Script | Public Variable Manipulation
RightLittleTouch | LookDown.cs | lookDown = true onPress, false onRelease
RightRingTouch | LookRight.cs | lookRight = true onPress, false onRelease
RightMiddleTouch | LookUp.cs | lookUp = true onPress, false onRelease
RightIndexTouch | LookLeft.cs | lookLeft = true onPress, false onRelease

These scripts allow touch interface for first-person shooter (FPS) position and rotation control, replacing keyboard and mouse input.

Using the LongPress Gesture

My original intent for this example was to have the LongPress Gesture make all the touch objects disappear after at least one object had been pressed for a certain amount of time. The touch objects would then all reappear after all touch objects had instigated a release gesture and had not been touched for a certain amount of time. When I tried implementing it this way, however, the behavior was not as I expected, possibly because the LongPress Gesture was used in conjunction with the standard Press and Release Gestures. As a workaround, I implemented this functionality by using the already-implemented Press and Release Gestures in combination with public variables and the delta time method in the system timer.

When initially setting up the Unity 3D scene, I configured a TopLevelGameObject asset to hold the TouchScript Touch Manager and the TouchScript Windows 7 Touch Input script. To facilitate the desired LongPress Gesture, I added a custom C# script named PublicVariables.cs to the TopLevelGameObject asset. I did this not only to hold public variables but also to perform actions based on the state of these variables.

To configure this disappear and reappear functionality, I configured each move and look script associated with its respective touch sphere to have access to the public variables in PublicVariables.cs. PublicVariables.cs contains a Boolean variable for the state of each modified sphere’s move or look Press Gesture, being true when the modified sphere is pressed and false when it is released.

The PublicVariables.cs script uses the state of these variables to configure a single variable used to set the state of each modified sphere’s MeshRenderer. I configure the timer such that if any modified sphere or combination of modified spheres has been pressed for more than 10 seconds, the variable controlling the MeshRenderer state is set to False. If all of the spheres have been released for more than 2 seconds, the MeshRenderer state is set to True. Each move and look script has in its Update function a line of code to enable or disable its respective sphere’s MeshRenderer based on the state of this variable in PublicVariables.cs.

This code results in all of the modified spheres disappearing when any sphere or combination of spheres has been pressed for more than 10 consecutive seconds. The modified spheres then all reappear if all modified spheres have been released for more than 2 seconds. By enabling and disabling the modified spheres’ MeshRenderer, only the modified sphere’s visibility is affected, and it remains an asset in the scene and is able to process touch gestures. As such, the modified spheres are still used to manipulate the scene’s first-person controller. The user is required to intuitively know where the spheres are positioned and be able to use them while they are not being rendered to the screen. Examine the PublicVariables, Move, and Look scripts in the example provided to see the code in its entirety.

The Tap Gesture

To demonstrate the use of multiple gestures with one asset, I add the Tap Gesture to all four move spheres. The Tap Gesture is configured in all four of the left GUI widget’s modified spheres’ respective move scripts. The move scripts are then configured for access to the first-person controller’s Character Motor script. I configure the tap functions in each move script to manipulate the maximum speed variables in the Character Motor’s movement function.

The MoveForward script attached to the LeftRingTouch modified sphere is configured so that a Tap Gesture increases the maximum forward speed and maximum reverse speed by one. I configure the MoveReverse script attached to the LeftIndexTouch modified sphere for a Tap Gesture to decrease the maximum forward speed and maximum reverse speed by one. I configure the MoveLeft script attached to the LeftLittleTouch modified sphere for a Tap Gesture to increase the maximum sideways speed by one and the MoveRight script attached to the LeftMiddleTouch modified sphere for a Tap gesture to decrease the maximum sideways speed by one. The maximum speed variables are floating-point values and can be adjusted as desired.

When using the default settings with the Tap Gesture, the speeds change during the period when the user may want to press the modified sphere to instigate movement. In short, Press and Release Gestures are also considered Tap Gestures. To mitigate this behavior, I changed the Time Limit setting in the Will Recognize section of the Tap Gesture (see Figure 3) from Infinity to 0.25. The lower this setting, the sharper the tap action must be to instigate the Tap Gesture.


Figure 3. Unity* 3D editor showing a modified Time Limit setting in a Tap Gesture

The modified sphere can be used to navigate the scene and adjust the speed at which the scene is navigated. A quirk of this method for navigating and adjusting speed is that when a Tap Gesture is used to adjust speed, the first-person controller is also moved in the direction associated with the modified sphere that was tapped. For example, tapping the LeftIndexTouch modified sphere to decrement the maximum forward speed and maximum reverse speed slightly moves the first-person controller, and subsequently the scene’s main camera, in reverse. In the accompanying Unity 3D project, I add GUI labels to display the maximum speed setting so that the labels can be visualized when tapping the modified spheres. You can remove this quirk by adding a GUI widget component that, when used, disables the Press and Release Gestures, allowing the user to tap the GUI widget component without moving the main scene’s camera. After the maximum forward speed and maximum reverse speed are set to the user’s preference, the new GUI widget component can be used again to enable the Press and Release Gestures.

When developing this portion of the example, I intended to add a Flick Gesture in combination with the Tap Gesture. The Tap Gesture was going to increase speed, and the Flick Gesture was intended to decrease speed. However, when adding both the Flick and the Tap Gestures, only the Tap Gesture was recognized. Both worked independently with the Press and Release Gestures, but the Flick Gesture was never recognized when used in conjunction with the Tap Gesture.

The Flick Gesture

To demonstrate the Flick Gesture, I add functionality to the modified spheres on the right side of the screen. The look scripts are attached to these spheres and control the rotation of the scene’s main camera, which is a child of the first-person controller. I begin by adding a Flick Gesture to each sphere. I configure the Flick Gestures added to the RightTouchIndex and RightTouchRing modified spheres that control horizontal rotation with their touch direction as horizontal (see Figure 4). I configure the Flick Gestures added to the RightTouchMiddle and RightTouchLittle modified spheres that control vertical rotation with their touch direction as vertical. This may be useful when the modified spheres have disappeared after being pressed for 10 or more seconds and the touch interface does not respond to the user’s flick (as opposed to responding in an undesired manner). The user then knows that the touch interface–modified spheres need to be released, allows 2 seconds for the modified spheres to reappear, and then reengages the touch GUI widget.


Figure 4. Unity* 3D editor showing a modified Direction setting in a Flick Gesture

Each look script uses the public variables that exist in the Mouse Look script. When a modified sphere is flicked, the Mouse Look script instigates a rotation in the respective direction, but because there is no flick Release Gesture, the rotation continues indefinitely. To stop the rotation, the user must sharply press and release the modified sphere that was flicked. This action causes an additional degree of rotation from the Press Gesture but is followed by the Release Gesture, which sets the respective rotation public variable to False, stopping the rotation.

Like the Tap Gesture, the Flick Gesture now works in conjunction with the Press and Release Gestures. Users can still rotate the scene’s main camera by holding down the appropriate modified sphere, releasing it to stop the rotation. With the Flick Gesture implemented, users can also flick the desired modified sphere to instigate a continuous rotation that they can stop by pressing and releasing the same modified sphere.

The Remaining Gestures

To this point in the example, all of the gestures implemented enhance the user’s ability to directly navigate the scene. I use the remaining gestures (Rotate, Scale, and Pan) to allow the user to modify the touch targets’ (the modified spheres) layout for improved ergonomics.

Also, up to this point, all of the gestures are discrete in nature. An immediate action occurs when a Unity 3D asset is tapped, pressed, released, or flicked. This action may be the setting of a variable that results in a continuous action (the flick-instigated rotation), but the actions are discrete in nature. The Rotate, Scale, and Pan Gestures are continuous in nature. These gestures implement a delta method where the difference between the current state of the gesture and that of the previous frame is used in the script to manipulate a Unity 3D screen asset as desired.

The Rotate Gesture

I add the Rotate Gesture in the same way as previous gestures. I use the Add Component menu in the Inspector Panel to add the TouchScript gesture, and the script attached to the touch asset receiving the gesture is modified to react to the gesture. When implemented, the Rotate Gesture is instigated by a movement similar to using two fingers to rotate a coin on a flat surface. This action must occur within an area circumscribed by the Unity 3D asset receiving the gesture.

In this example, rotating the modified spheres results in the capsule shape becoming more of a sphere as the end of the modified sphere is brought into view. This behavior gives the user an alternate touch target interface, if desired. In this example, this functionality is of more use for the modified spheres on the right side of the screen. For the rotate widget on the right side of the screen, the user can flick the appropriate modified sphere for constant rotation up, down, left, or right. I configure the modified spheres controlling vertical rotation with vertical flicks. I configure the modified spheres controlling horizontal rotation with horizontal flicks. The modified spheres controlling horizontal rotation can now be rotated so that the longest dimension is horizontal, allowing for a more intuitive flicking action.

When rotating the modified spheres that are closest to the center of the screen, the spheres take on a more spherical appearance. The farther a modified sphere is from the center of the screen when it is rotated, the more capsule-like its appearance remains. This is an effect of the modified sphere’s distance from the scene’s main camera. It may be possible to mitigate this effect by adjusting the axes on which the modified sphere rotates. The following line of code does the work of rotating the modified sphere when the Rotate Gesture is active:

targetRot = Quaternion.AngleAxis(gesture.LocalDeltaRotation, gesture.WorldTransformPlane.normal) * targetRot;

The second argument in the Quaternion.AngleAxis is the axis on which the modified sphere rotates. This argument is a Vector3 and can be changed as follows:

targetRot = Quaternion.AngleAxis(gesture.LocalDeltaRotation, new Vector3(1, 0, 0)) * targetRot;

By adjusting this Vector3 as a function of the modified sphere’s distance from the position relative to the scene’s main camera, I can remove the effect, resulting in the modified sphere’s appearance being more consistent and spherical across all the spheres.

The Scale Gesture

I add the Scale Gesture as an additional means of altering the modified sphere’s presentation. When rotated, the resulting circular touch target may not be large enough for the user’s preference. The user can employ the Scale Gesture to modify the size of the touch target.

The motion used to instigate a Scale Gesture is similar to the pinch gesture used on mobile devices: two fingers are brought together for a scale-down gesture and spread apart for a scale-up gesture. The code in the accompanying Unity 3D project scales the target asset uniformly. This is not required: you can code for scaling on any combination of the x, y, or z axes.

An additional feature that may help with user utilization of the GUI widgets is automatic scaling following the 10 seconds of constant use, resulting in the disappearance of the GUI widgets. By automatically multiplying a modified sphere’s transform.localscale by 1.1 whenever the modified sphere’s MeshRenderer has been disabled, the user automatically gets a larger touch target, which may reduce the user’s need to intermittently release the GUI widgets to confirm the modified sphere’s location on the touch screen.

The Pan Gesture

For the purposes of ergonomics, the Pan Gesture is probably the most useful gesture. It allows users to touch the objects to be manipulated and drag them anywhere on the screen. As the modified spheres are initially positioned, users may, depending on the Ultrabook device they are using, have wrists or forearms resting on the keyboard. With the Pan Gesture functionality implemented, users can drag the modified spheres to the sides of the screen, where there may be less chance of inadvertently touching the keyboard. For additional ergonomic optimization, users can touch all four modified spheres that affect the first-person controller and drag them at the same time to a place on the screen that allows them to rest their wrists and arms as desired.

The following two lines of code, taken from a Unity 3D example, do the work of moving the Unity 3D scene asset when the Pan Gesture is active:

var local = new Vector3(transform.InverseTransformDirection(target.WorldDeltaPosition).x, transform.InverseTransformDirection(target.WorldDeltaPosition).y, 0);
targetPan += transform.parent.InverseTransformDirection(transform.TransformDirection(local));

Note that in the above code, the z component of the Vector3 is zero and that in the accompanying example, when the modified spheres are moved, or panned, they move only in the x–y plane. By modifying this Vector3, you can customize the interface a great deal. The first example that comes to mind is having a Pan Gesture result in a much faster Unity 3D asset motion on one axis than another.

In the “Everything” example provided with TouchScript, the following line of code limits the panning of the manipulated asset on the y-axis:

if(transform.InverseTransformDirection(transform.parent.TransformDirection(targetPan - startPan)).y < 0) targetPan = startPan;

This line was commented out in the accompanying example but can easily be modified and implemented if you want to limit how far a user can move a GUI widget component from its original position.

Video 1: Touch Script Multi Gesture Example

Resolving Issues During Development of the Example

One issue I found during development was that the Rotate Gesture never seemed to be recognized when the Press, Release, and Tap Gestures were added. To work around this issue, I added a modified sphere to the GUI widget on the left side of the screen intended for use by the left thumb. I configured this modified sphere with a script (ToggleReorganize.cs) so that when a user taps the modified sphere, a Boolean variable is toggled in the PublicVariables script. All of the modified sphere’s scripts reference this variable and disable their Press, Release, Tap, or Flick Gesture when the toggle variable is True, resulting in a UI that requires the user to tap the left thumb button to modify the widget. The user must then tap this left thumb button again when finished modifying the widget to go back to navigating the scene.

During the process of implementing this functionality, I discovered that the right widget did not require this functionality for the widget to be modified. The user could rotate, pan, and scale the widget without tapping the left thumb modified sphere. I implemented the functionality anyway, forcing the user to tap the left thumb modified sphere in the left widget to alter the ergonomics of the right widget. I did this because the right widget became awkward to use when it was modified at the same time it was being used to navigate the scene.

Looking Ahead

In addition to the Unity 3D scene navigation control, users can customize the GUI widgets. They can rotate, scale, and move (pan) the components of the widget to suit their ergonomic needs. This functionality is valuable when developing applications that support multiple platforms, such as Ultrabook devices, touch laptops, and tablets. These platforms can be used in any number of environments, with users in a variety of physical positions. The more flexibility the user has to adjust GUI widget configuration in these environments, the more pleasant the user’s experience will be.

The GUI widgets used in the accompanying example can and should be expanded to use additional GUI widget components designed for thumb use that can control assets in the game or simulation or control assets that are components of the GUI widgets. This expansion may include items in the simulation, such as weapons selection, weapons firing, camera zoom, light color, and jumping. To alter the GUI widget components, these thumb buttons can change the modified spheres to cubes or custom geometry. They can also be used to change the opacity of a material or color that GUI widget components use.

Conclusion

This article and the accompanying example show that using TouchScript with Unity 3D is a valid means of implementing a user-configurable GUI widget on Ultrabook devices running Windows 8. The GUI widgets implemented in the example provide a touch interface for the Unity 3D first-person controller. This interface can similarly be connected to the Unity 3D third-person controller or custom controllers simulating an overhead, driving, or flying environment.

When developing Unity 3D GUI widgets for Ultrabook devices running Windows 8, the desired result is for users to not revert back to the keyboard and mouse. All of the functionality that is typically associated with a legacy UI (a keyboard and mouse first-person controller) should be implemented in a production-grade touch interface. By taking this into consideration when implementing the TouchScript gestures described in this article and the accompanying example, you can greatly increase your prospects for obtaining a positive user response.


Note: The example provided with this article uses and references the examples provided with TouchScript as downloaded at no cost from the Unity 3D Asset Store.


About the Author

Lynn Thompson is an IT professional with more than 20 years of experience in business and industrial computing environments. His earliest experience is using CAD to modify and create control system drawings during a control system upgrade at a power utility. During this time, Lynn received his B.S. degree in Electrical Engineering from the University of Nebraska, Lincoln. He went on to work as a systems administrator at an IT integrator during the dot com boom. This work focused primarily on operating system, database, and application administration on a wide variety of platforms. After the dot com bust, he worked on a range of projects as an IT consultant for companies in the garment, oil and gas, and defense industries. Now, Lynn has come full circle and works as an engineer at a power utility. He has since earned a Masters of Engineering degree with a concentration in Engineering Management, also from the University of Nebraska, Lincoln.

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Intel Software License Manager Getting Started Tutorial

Intel® Software License Manager - Getting Started Tutorial

Contents

Obtain your License File
    Who is eligible to obtain a license file?
Set up the License Manager
Set up Clients
Troubleshooting, Tips & Tricks
Disclaimer and Legal Information

The Intel® Software License Manager is required to manage Intel® Software Development Products floating licenses in a development environment. The license manager is NOT required for single-user licenses.
 

This tutorial will help you set up your floating license environment quickly.
Follow these steps and choose the setup scenario that matches best:

  • Obtain your license file
  • Set up the license manager
  • Set up clients

Obtain your License File

Please refer to the note 'Who is eligible to obtain a license file?' below.
You need the MAC address of the license server (and, optionally, the license server name) to register and create your license file.

  1. Obtain a MAC address of the license server.
    If you have the license manager tools already installed you can run:
        lmutil lmhostid
    and take any of the addresses issued. 
    Otherwise run:
    OS X*, Linux*:
        ifconfig
        and take any of the 'HWaddr' values displayed
    Windows*:
        ipconfig /all
    and take any of the 'Physical Address' values displayed
    Note:
    Remove all special characters from the string and, if required, pad it with leading zeros to get the 12-digit alphanumeric string that is your Host ID.

  2. Obtain the license server name (optional)
    Specifying the real server name is optional at the registration step. You can enter a placeholder during registration and modify the server name in the license file later with a plain text editor.
    Linux / OS X:
         hostname
    Windows:
         echo %COMPUTERNAME%
     
  3. Register your Serial Number
    Login to the Intel® Registration Center and enter your SN and email, or if you are already logged in, your SN at the bottom of the "My Products" page.
    Enter your Host ID and server name you obtained in Step 1.
    The license file will be emailed to the email address associated with your license registration.

    Note:
    Registration of the Intel® Cluster Studio XE for Linux* is a two-step process. During registration you need to accept the product EULA. If you miss accepting the EULA, you will not get a license file attached to the registration notification email from Intel. In that case, visit the IRC and register the serial number again; you will then have the chance to accept the EULA.

Who is eligible to obtain a license file?

Only the license owner or license administrator can perform a license registration and obtain a license file. Registered users have the right to download products and have read-access to the license history only.

    How can I identify which role I have?

  1. Login to the Intel Registration Center
  2. Click on the related product in the left-hand column "Product Subscription Information".
  3. On the 'Subscription History' page, click on 'Manage'.
  4. If you have access to the 'Manage' tool, you are either the owner or the license administrator. Registered users don't have access to the 'Manage' tool.
  5. If you have access, you will see the Manage page, where you can see who the license owner and the license administrator are.

Set up the License Manager

You can use an existing FlexNet or FlexNet Publisher license manager (lmgrd) if its version is the same as or higher than the version of the Intel vendor daemon (INTEL). The description in this section also applies to a redundant-server installation, where all steps need to be done on all 3 license servers.
A single license server or a redundant license server configuration can manage licenses from clients running on different operating systems.
Please refer also to the Floating Licenses and Software License Manager Compatibility section.
Please set up your license manager according to one of the following scenarios that match best:

I have to install a new license manager on a license server

  1. Download
    Go to the License Manager Download Page (if you are logged out from IRC, the login prompt appears first)
    Select the right platform from the drop-down menu and download the right version:
    _ia32 is for 32-bit operating systems
    _intel64 is for Intel® 64 operating systems
    The Intel® Software License Manager User's Guide is also available from this download page.
    Notes:
    - Only the most current version is available on the license manager download page.
    - Problems accessing this site indicate that you don't have a valid floating license registration associated with your email.
    - Only license owners and license administrators have access to that site.

  2. Installation
    Linux* / OS X*:
    - Copy the downloaded file into the directory where you want to install the license server.
    - Extract the downloaded file, for example:
           tar xzvf l_isl_server_p_2.2.005_intel64.tar.gz  (Linux)
           tar xzvf m_isl_server_p_2.2.005.tar.gz  (OS X)

    - Change to directory flexlm and run the installer, for example:
          cd flexlm 
          Install_INTEL
    Windows:
    - Run the self-extracting w_isl_server_p_2.xxx.exe file and follow the instructions provided with the graphical installation wizard.
     
  3. Start the license manager
    Linux* / OS X*:
    From the license manager directory /flexlm, start the license manager lmgrd. You can specify a single license file or a license directory (if you have more than one license file). By default, the license file name is server.lic, which was created by the license manager installation. However, you can move any valid floating license into the /flexlm directory and start it with the license manager. Creating a log file with the -l option is optional.
         lmgrd -c server.lic -l flexnet_logfile.txt      (reads the server.lic license file and creates a logfile)
         lmgrd -c . -l flexnet_logfile.txt               (reads all .lic files from the flexlm directory and creates a logfile)

    Windows:
    You can run the license manager under Windows as a system service (recommended) or as a user application in a command shell (sometimes required for some Windows OS versions).
    System service:
    - Start > Intel(R) Software Development Products > Intel(R) Software License Manager and browse to the license file.
    User application:
    - Open a command shell
    - Navigate to c:\Program Files (x86)\Common Files\Intel\LicenseServer\
    - lmgrd -c server.lic -l flexnet_logfile.txt      (reads server.lic license file and creates logfile)

I need to add another Intel software license to an existing installation.
You can combine licenses into one single license file (.lic) or copy all license files into a license directory
   
     Combine licenses

  1. Make sure that the SERVER and VENDOR lines are identical
  2. Remove the SERVER and VENDOR lines of your new license file
  3. Copy rest of contents of new license into the existing license file
  4. Have the license manager re-read the combined license file, for example:
         lmutil lmreread -c <combined_license_file>
    On Windows you can re-read the new license also via the license manager GUI (Start > Intel(R) Software Development Products)

     Create license directory

  1. Create a new license directory (recommended: Use the flexlm installation directory)
  2. Copy all existing and new license files there (no need to edit and combine license files)
  3. Have the license manager re-read the licenses from the license directory, for example:
         lmutil lmreread -c <license_dir>
    On Windows you can re-read the new license also via the license manager GUI (Start > Intel(R) Software Development Products)

I need to add an Intel software license to an existing FlexNet/FlexNet Publisher license manager (lmgrd) already running for 3rd party SW.
Same as above, but you need to make sure that the license manager lmgrd is of the same or a higher version than the Intel Vendor Daemon INTEL/INTEL.exe.

     Check for compatibility

  1. Download and install the Intel license manager as described above
  2. Run the Intel Vendor Daemon as follows:
         INTEL -v
  3. Run the license manager as follows:
         lmgrd -v
    If the major and minor versions of lmgrd are the same as or higher than the INTEL vendor daemon's versions, you can use the existing license manager with the Intel Vendor Daemon. Otherwise you need to install an Intel License Manager in parallel to the existing license manager and run it on a different port.

     Add Intel Vendor Daemon to an existing license manager

  1. Download the Intel license manager package as described above.
  2. Extract the downloaded package to a temporary directory.
    Note: Under Windows you cannot change the installation directory, so in order not to overwrite an existing license manager installation, install it on another PC!
  3. Take the Intel Vendor Daemon file INTEL / INTEL.exe and copy it to a directory on the license server (recommended: use the existing directory where the license manager lmgrd / lmgrd.exe is running).
  4. If required (i.e., if you do not copy the Intel vendor daemon into the default directory), edit the license file and specify the path where the vendor daemon is located, for example
         VENDOR /flexlm/INTEL
    or
         VENDOR c:\Program Files (x86)\Common Files\Intel\LicenseServer\INTEL.exe
  5. Combine licenses or create a new license directory as described above.
  6. Restart the license manager as described above.

Set up Clients

There are several ways to set up a floating-licensed product on a client machine, but the most appropriate ones for an existing product installation or a new/update installation on clients are the following.
Please note that this chapter applies to a single-license-server configuration only. Please refer to the Tips & Tricks section to use this method also for a redundant server configuration.

Product already exists on client
In this case you only need to create a new license file:

  1. Use an editor and create a file with extension .lic with the following contents:
    SERVER <server_name|IP-address> <ANY|server_MAC_address> <port>
    USE_SERVER
  2. Copy the file to the default license directory:
    Linux:
    /opt/intel/licenses (for a (sudo)root installation)
    $HOME/intel/licenses (for a user installation)
    OS X:
    /Users/Shared/Library/Application Support/Intel/Licenses
    Windows:
    c:\Program Files (x86)\Common Files\Intel\Licenses\ (on 64-bit Windows systems)
    c:\Program Files\Common Files\Intel\Licenses\ (on 32-bit Windows systems)
  3. Remove (or rename) all old or unused .lic files in the license directory

Update installation on client

  1. Start the product installer
  2. When it comes to product activation, the default activation option will be "Use Existing Activation"
    Select that option only if you are sure that the right server license was already set up with a previous installation. Otherwise select the following:
    Use "Alternative Activation > Use License Server" and specify the server name and port of the license server.

New product installation on client
 

  1. Start the product installer
  2. When it comes to product activation, the default activation option will be "Use Serial Number"
  3. Do not use this option, but select "Alternative Activation > Use License Server" instead and specify the server name and port of the license server.
    CAVEAT! In complex environments this step may take a long time (several minutes) until the client-server connection and license verification/installation are finished.

Troubleshooting, Tips & Tricks
 

How to best setup a redundant server configuration

Setting up a redundant license server configuration remotely as described above is not supported by default, but you can perform the same steps as described above by referencing only one of the servers. After installation and server setup, you can "expand" your client license by adding the other two servers to the client license, such as:

     SERVER <servername1> ANY <port>
     SERVER <servername2> ANY <port>
     SERVER <servername3> ANY <port>
     USE_SERVER

How to check license configuration from client

You can use the lmutil tool on the client as well to perform the same license server checks as on the server.
Copy the lmutil tool from the license server to the client or obtain it from Flexera's web page (http://www.globes.com/support/fnp_utilities_download.htm).
Instead of using the license file or license directory, use port@server as the license parameter when invoking lmutil on the client, for example:
     lmutil lmstat -a -c <port>@<servername>
     lmutil lmdiag -c <port>@<servername>

How to check which license is being used

Linux only:
On Linux, a convenient method is available to check which license and license server are being used. Create the environment variable FLEXLM_DIAGNOSTICS, assign it the value 3, and invoke any of the tools icc, ifort, amplxe-cl, inspxe-cl, or idbc.

Linux/OS X:
Create the INTEL_LM_DEBUG environment variable, assign it a log file name, and invoke any of the tools icc, ifort, amplxe-cl, inspxe-cl, or idbc. Search the log file for any occurrence of the string SMSAxxxxxxxx. This 8-character suffix is the serial number that was used when invoking the product.

Windows:
Open an Intel Software Development Product command prompt from the Start menu, create the INTEL_LM_DEBUG environment variable, assign it a log file name, and invoke any of the tools icl, ifort, amplxe-cl, or inspxe-cl. Search the log file for any occurrence of the string SMSAxxxxxxxx. This 8-character suffix is the serial number that was used when invoking the product.

Mixed Windows/Linux/OS X license server/client environment

You can run a single license server to manage a combined license for different client operating systems. The license server can run on Windows, Linux or OS X. In a heterogeneous environment it may be required to specify the full license server name including the full primary DNS suffix or the IP address in the license files of the clients.

Floating Licenses and Software License Manager Compatibility

In order to have full functionality of the floating licensing service, it is strongly recommended to use a license manager version that is specified for use with your product (please refer to the 'System Requirements' section in the product release notes). Incompatible Intel license manager versions may result in reduced functionality when updating products on your client machines; for example, they may not detect existing product activations or may not allow product activation via the license server.

If you encounter problems updating products try one of the following workarounds:

  • Upgrade the Intel® Software License Manager to the newest version available on the Intel® Registration Center (choose the right OS and platform; only users with floating license registrations have access to this link).
  • Use a serial number (an alphanumeric code of the format xxxx-xxxxxxxx).
  • Use a license file (file extension .lic).

Note: Remote activation is not supported for floating licenses.

 


Disclaimer and Legal Information

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order. Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or by visiting Intel's Web Site.
Intel processor numbers are not a measure of performance. Processor numbers differentiate features within each processor family, not across different processor families. See http://www.intel.com/products/processor_number for details.
Intel, Intel Atom, and Intel Core are trademarks of Intel Corporation in the U.S. and/or other countries.

* Other names and brands may be claimed as the property of others.
Java is a registered trademark of Oracle and/or its affiliates.
Copyright © 2013, Intel Corporation. All rights reserved.

Creating Eclipse External Tool configurations for Intel® VTune Amplifier


 

Introduction

The purpose of this article is to show how we can easily integrate VTune™ Amplifier into Eclipse* by using the External Tools configuration capability in Eclipse. We will create several Eclipse External Tools configurations, both for launching collections and for displaying collected data.

Overview

These are the tool configurations we will be creating:

  1. Local collection running for a duration of 5 seconds with results stored in the Eclipse project.
  2. Displaying collected results in the VTune Amplifier GUI.
  3. Local collection launching an application created in an Eclipse project.
  4. Remote collection running on an embedded target for 5 seconds.

Details

External tool configuration #1 - Local collection running for a duration of 5 seconds with results stored in Eclipse project

Select Eclipse menu item Run->External Tools->External Tools Configurations…

This will bring up the External tool configuration dialog.

In our first configuration, we will be running a local advanced-hotspot collection for a duration of 5 seconds.

Click on the “New Item” icon.

  • Name = amplxe-local
  • Location is the path of the amplxe-cl executable in your file system (/opt/intel/vtune_amplifier_xe_2013/bin64/amplxe-cl).
  • Arguments = -collect advanced-hotspot -r ${project_loc}/vtune_result@@@{at} -d 5

Once you have specified these values, you can click Apply to save. Then you can kick off a data collection by clicking Run.

Now that you have created a configuration you can kick off a run by using the menu item: Run->External Tools->amplxe-local.

This run will create a VTune Amplifier result directory in your project directory. 

External tool configuration #2 - Displaying collected results in the VTune Amplifier GUI

We can create a new External tool configuration to open this result using the VTune™ Amplifier GUI.

Select Eclipse menu item Run->External Tools->External Tools Configurations…

This will bring up the External tool configuration dialog.

Click on the “New Item” icon.

  • Name = amplxe-gui-project
  • Location is the path of the amplxe-gui executable in your file system (/opt/intel/vtune_amplifier_xe_2013/bin64/amplxe-gui).
  • Arguments = ${resource_loc}

Click Apply. Then click on the VTune Amplifier result directory in your project and click Run. This will launch the VTune Amplifier GUI on the result directory you have selected.

External tool configuration #3 - Local collection launching an application created in an Eclipse project

For our next configuration, we will reference an application that we built as part of Eclipse/CDT. We will call this configuration amplxe-local1. In this case, all of the values are the same as in the first configuration, except that we will not specify the -d 5 argument. Instead, we will specify the location of the binary in the Eclipse project. To do this, remove -d 5 and click on the Variables button, then scroll down to the resource_loc variable and click on it. Now, if you click Run->External Tools->amplxe-local1, it will launch an amplxe-cl data collection on whatever executable you have selected in your project.

 

External tool configuration #4 - Remote collection running on an embedded target for 5 seconds

In our last tool configuration, we will launch a remote amplxe-cl collection on an embedded target. Again, specify the values as previously stated, but instead of the resource_loc variable, this time use the string_prompt variable. Before ${string_prompt}, add the following syntax: -target ssh:root@${string_prompt}


Now when you click Run->External Tools->amplxe-remote, a dialog will prompt you for the target where you want to launch your collection.
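For reference, the resulting invocation is along these lines, with the value you enter at the prompt substituted for the target address (a sketch; the remaining arguments follow the earlier configurations):

amplxe-cl -target ssh:root@<target-ip> -collect advanced-hotspot -d 5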

Summary

We have shown four ways that VTune Amplifier can be integrated into Eclipse; many other such integrations are possible. VTune Amplifier has a powerful command-line interface and GUI that can make working with it in Eclipse relatively seamless.

 

Gameplay: Touch controls for your favorite games


Download Article

Download Gameplay: Touch controls for your favorite games [PDF 703KB]

GestureWorks Gameplay is a revolutionary new way of interacting with popular PC games. Gameplay software for Windows 8 lets gamers use and build their own Virtual Controllers for touch, which are overlaid on top of existing PC games. Each Virtual Controller overlay adds buttons, gestures, and other controls that are mapped to input the game already understands. In addition, gamers can use hundreds of personalized gestures to interact on the screen. Ideum’s collaboration with Intel gave them access to technology and engineering resources to make the touch overlay in Gameplay possible.

Check out this one-minute video that explains the Gameplay concept.

It’s all about the virtual controllers

Unlike traditional game controllers, virtual controllers can be fully customized and gamers can even share them with their friends. Gameplay works on Windows 8 tablets, Ultrabooks, 2-in-one laptops, All-In-Ones, and even multitouch tables and large touch screens.


Figure 1 - Gameplay in action on Intel Atom-based tablet

"The Virtual Controller is real! Gameplay extends hundreds of PC games that are not touch-enabled and it makes it possible to play them on a whole new generation of portable devices, " says Jim Spadaccini, CEO of Ideum, makers of GestureWorks Gameplay. "Better than a physical controller, Gameplay’s Virtual Controllers are customizable and editable. We can’t wait to see what gamers make with Gameplay."


Figure 2 - The Home Screen in Gameplay

Several dozen pre-built virtual controllers for popular Windows games come with GestureWorks Gameplay (currently there are over 116 unique titles). Gameplay lets users configure, layout, and customize existing controllers as well. The software also includes an easy to use, drag-and-drop authoring tool allowing users to build their own virtual controller for many popular Windows-based games distributed on the Steam service.


Figure 3 - Virtual Controller layout view

Users can place joysticks, D-pads, switches, scroll wheels, and buttons anywhere on the screen, change the size, opacity, and add colors and labels. Users can also create multiple layout views which can be switched in game at any time. This allows a user to create unique views for different activities in game, such as combat versus inventory management functions in a Role Playing Game.


Figure 4 - Virtual Controller Global Gestures View

Powered by the GestureWorks gesture-processing engine (aka GestureWorks Core), Gameplay provides support for over 200 global gestures. Basic global gestures such as tap, drag, pinch/zoom, and rotate are supported by default, but are also customizable. This allows extension of overlaid touch controllers, giving gamers access to multi-touch gestures that can provide additional controls to PC games. For example, certain combat moves can be activated with a simple gesture versus a button press in an FPS. Gameplay even includes experimental support for accelerometers so you can steer in a racing game by tilting your Ultrabook™ or tablet, and it detects when you change your 2 in 1 device to tablet mode to optionally turn on the virtual controller overlay.

Challenges Addressed During Development

Developing all this coolness was not easy. To make the vision for Gameplay a reality, several technical challenges had to be overcome. Some of these were solved using traditional programming methods, while others required more innovative solutions.

DLL injection

DLL injection is a method used for executing code within the address space of another process by getting it to load an external dynamically-linked library. While DLL injection is often used by external programs for nefarious reasons, there are many legitimate uses for it, including extending the behavior of a program in a way its authors did not anticipate or originally intend. With Gameplay, we needed a method to insert data into the input thread of the process (game) being played so the touch input could be translated to inputs the game understood. Of the myriad methods for implementing DLL injection, Ideum chose to use the Windows hooking calls in the SetWindowsHookEx API. Ultimately, Ideum opted to use process-specific hooking versus global hooking for performance reasons.
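As an illustration only (a hypothetical sketch, not Gameplay's actual code), a process-specific message hook built on SetWindowsHookEx looks roughly like the following. The hook procedure must live in a DLL that Windows maps into the target process, and gameInputThreadId is assumed to be the id of the game's input thread:

#include <windows.h>

static HHOOK g_hook = NULL;

// Exported from the injected DLL; runs inside the target process and sees
// the messages posted to the hooked thread's queue.
LRESULT CALLBACK GetMsgProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code >= 0)
    {
        MSG* msg = reinterpret_cast<MSG*>(lParam);
        // Inspect or rewrite the input message here, e.g., translate touch
        // data into input the game already understands.
        (void)msg;
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

bool InstallHook(HINSTANCE injectedDll, DWORD gameInputThreadId)
{
    // Hooking a specific thread id (instead of 0) keeps the hook
    // process-specific rather than global.
    g_hook = SetWindowsHookEx(WH_GETMESSAGE, GetMsgProc, injectedDll, gameInputThreadId);
    return g_hook != NULL;
}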

Launching games from a third-party launcher

Two methods of hooking into a target process's address space were explored. The application can hook into a running process's address space, or the application can launch the target executable as a child process. Both methods are sound; however, in practice, it is much easier to monitor and intercept processes or threads created by the target process when the application is a parent of the target process.

This poses a problem for application clients, such as Steam and UPlay, that are launched when a user logs in. Windows provides no guaranteed ordering for startup processes, and the Gameplay process must launch before these processes to properly hook in the overlay controls. Gameplay solves this issue by installing a lightweight system service during installation that monitors for startup applications when a user logs in. When one of the client applications of interest starts, Gameplay is then able to hook in as a parent to the process, ensuring the overlay controls are displayed as intended.
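A hypothetical sketch of the parent-launch approach: start the target executable suspended, perform any hook or overlay setup, and only then let it run (names and error handling are simplified and are not Gameplay's actual code):

#include <windows.h>

bool LaunchGameAsChild(const wchar_t* exePath, PROCESS_INFORMATION* pi)
{
    STARTUPINFOW si = { sizeof(si) };
    // CREATE_SUSPENDED lets the parent finish its hook/overlay setup before
    // the child's main thread executes its first instruction.
    if (!CreateProcessW(exePath, NULL, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, pi))
        return false;

    // ... install hooks / prepare the overlay for the child process here ...

    ResumeThread(pi->hThread);   // let the game start once setup is complete
    return true;
}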

Lessons Learned

Mouse filtering

During development, several game titles were discovered that incorrectly processed virtual mouse input received from the touch screen. This problem largely manifested with First Person Shooter titles or Role Playing Titles that have a "mouse-look" feature. The issue was that the mouse input received from the touch panel was absolute with respect to a point on the display, and thus in the game environment. This made the touch screen almost useless as a "mouse-look" device. The eventual fix was to filter out the mouse inputs by intercepting the input thread for the game. This allowed Gameplay to emulate mouse input via an on-screen control such as a joystick for the "mouse-look" function. It took a while to tune the joystick responsiveness and dead zone to feel like a mouse, but once that was done, everything worked beautifully. You can see this fix in action on games like Fallout: New Vegas or The Elder Scrolls: Skyrim.
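The general idea can be sketched as follows: treat the on-screen joystick deflection as a direction, ignore deflections inside the dead zone, and feed scaled relative deltas to the game in place of the absolute touch coordinates. This is a hypothetical illustration, not Gameplay's tuned implementation:

#include <math.h>

struct StickInput { float x, y; };   // normalized joystick deflection, each in -1..1

// Converts a virtual-joystick deflection into relative mouse-look deltas.
void EmulateMouseLook(const StickInput& stick, float sensitivity,
                      float deadZone, int* dx, int* dy)
{
    float magnitude = sqrtf(stick.x * stick.x + stick.y * stick.y);
    if (magnitude < deadZone)            // ignore tiny, accidental deflections
    {
        *dx = 0;
        *dy = 0;
        return;
    }
    // Rescale so movement ramps up smoothly from the dead-zone edge.
    float scale = (magnitude - deadZone) / (1.0f - deadZone);
    *dx = (int)(stick.x / magnitude * scale * sensitivity);
    *dy = (int)(stick.y / magnitude * scale * sensitivity);
}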

Vetting titles for touch gaming

Ideum spent significant amounts of time tuning the virtual controllers for optimal gameplay. There are several elements of a game that determine its suitability for use with Gameplay. Below are some general guidelines that were developed for what types of games work well with Gameplay:

Gameplay playability by game type

Good / Better / Best

Role Playing Games (RPG)

Simulation

Fighting

Sports

Racing

Puzzles

Real Time Strategy (RTS)

Third Person Shooters

Platformers

Side Scrollers

Action and Adventure

While playability is certainly an important aspect of vetting a title for use with Gameplay, the most important criterion is stability. Some titles will just not work with the hooking technique, input injection, or overlay technology. This can happen for a variety of reasons, but most commonly it is due to the game title itself monitoring its own memory space or input thread to check for tampering. While Gameplay itself is a completely legitimate application, it employs techniques that can also be used for the forces of evil, so unfortunately some titles that are sensitive to these techniques will never work unless enabled for touch natively.

User Response

While still early in its release, Gameplay 1.0 has generated some interesting user feedback regarding touch gaming on a PC. There are already some clear trends in the feedback being received. At a high level, it is clear that everyone universally loves being able to customize the touch interface for games. The remaining feedback focuses on personalizing the gaming experience in a few key areas:

  • Many virtual controllers are not ideal for left-handed people; this was an early change to many of the published virtual controllers.
  • Button size and position are the most common changes, so much so that Ideum is considering adding an automatic hand-sizing calibration in a future Gameplay release.
  • Many users prefer rolling touch inputs vs. discrete touch and release interaction.

We expect many more insights to reveal themselves as the number of user created virtual controllers increases.

Conclusion

GestureWorks Gameplay brings touch controls to your favorite games. It does this via a visual overlay and supports additional interactions like gestures, accelerometers, and 2 in 1 transitions. What has been most interesting in working on this project has been the user response. People are genuinely excited about touch gaming on PCs, and ecstatic that they can now play many of the titles they previously enjoyed with touch.

About Erik

Erik Niemeyer is a Software Engineer in the Software & Solutions Group at Intel Corporation. Erik has been working on performance optimization of applications running on Intel microprocessors for nearly fifteen years. Erik specializes in new UI development and micro-architectural tuning. When Erik is not working he can probably be found on top of a mountain somewhere. Erik can be reached at erik.a.niemeyer@intel.com.

About Chris

Chris Kirkpatrick is a software applications engineer working in the Intel Software and Services Group supporting Intel graphics solutions on mobile platforms in the Visual & Interactive Computing Engineering team. He holds a B.Sc. in Computer Science from Oregon State University. Chris can be reached at chris.kirkpatrick@intel.com.

Resources

https://gameplay.gestureworks.com/

http://software.intel.com/en-us/articles/detecting-slateclamshell-mode-screen-orientation-in-convertible-pc

 

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2014 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

Fast Panorama Stitching


Download paper as PDF

Introduction

Taking panoramic pictures has become a common scenario and is included in most smartphones’ and tablets’ native camera applications. Panorama stitching applications work by taking multiple images, algorithmically matching features between images, and then blending them together. Most manufacturers use their own internal methods for stitching that are very fast. There are also a few open source alternatives.

For more information about how to implement panorama stitching as well as a novel dual camera approach for taking 360 panoramas, please see my previous post here: http://software.intel.com/en-us/articles/dual-camera-360-panorama-application. In this paper we will do a brief comparison between two popular libraries, then go into detail on creating an application that can stitch images together quickly.

OpenCV* and PanoTools*

Two of the most popular open source stitching libraries are OpenCV and PanoTools. We initially started working with PanoTools—a mature stitching library available on Windows*, Mac OS*, and Linux*. It offers many advanced features and consistent quality. The second library we looked at is OpenCV. OpenCV is a very large project consisting of many different image manipulation libraries and has a massive user base. It is available for Windows, Mac OS, Linux, Android*, and iOS*. Both of these libraries come with sample stitching applications. The sample application with PanoTools completed our workload in 1:44. The sample application with OpenCV completed in 2:16. Although PanoTools was initially faster, we chose to use the OpenCV sample as our starting point due to its large user base and availability on mobile platforms.

Overview of Initial Application and Test Scenario

We will be using OpenCV’s sample application “cpp-example-stitching_detailed” as a starting point. The application goes through the stitching pipeline, which consists of multiple distinct stages. Briefly, these stages are:

  1. Import images
  2. Find features
  3. Pairwise matching
  4. Warping images
  5. Compositing
  6. Blending

For testing, we used a tablet with an Intel® Atom™ quad-core SoC Z3770 with 2GB of RAM running Windows 8.1. Our workload consisted of stitching together 16 1280x720 resolution images. 

Multithreading Feature Finding Using OpenMP*

Most of the stages in the pipeline consist of repeated work that is done on images that are not dependent on each other. This makes these stages good candidates for multithreading. All of these stages use a “for” loop, which makes it very easy for us to use OpenMP to parallelize these blocks of code.

The first stage we will parallelize is feature finding. First add the OpenMP compiler directive above the for loop:

#pragma omp parallel for
for (int i = 0; i < num_images; ++i)

The loop will now execute multithreaded; however, in the loop we are setting the values for the variables “full_img” and “img”. This will cause a race condition and will affect our output. The easiest way to solve this problem is to convert the variables into vectors. We should take these variable declarations:

Mat full_img, img;

and change them to:

vector<Mat> full_img(num_images);
vector<Mat> img(num_images);

Now within the loop, we will change each occurrence of each variable to its new name.

full_img becomes full_img[i]
img becomes img[i]

The content loaded in full_img and img is used later within the application, so to save time we will not release memory. Remove these lines:

full_img.release();
img.release();

Then we can remove this line from the compositing stage:

full_img = imread(img_names[img_idx]);

full_img is referred to again during scaling within the compositing loop. We will change the variable names again: 

full_img becomes full_img[img_idx]

img becomes img[img_idx]

Now the first loop is parallel. Next, we will parallelize the warping loop. First, we can add the compiler directive to make the loop parallel:

#pragma omp parallel for
for (int i = 0; i < num_images; ++i)

This is all that is needed to make the loop parallel; however, we can optimize this section a bit more.

There is a second for loop directly after the first one. We can move its work into the first loop to avoid launching a second parallel region. Move this line into the first for loop:

images_warped[i].convertTo(images_warped_f[i], CV_32F);

We must also move the variable definition for images_warped_f to above the first for loop:

vector<Mat> images_warped_f(num_images);


Now we can parallelize the compositing loop. Add the compiler directive in front of the for loop:

#pragma omp parallel for
for (int img_idx = 0; img_idx < num_images; ++img_idx)

Now the third loop is parallelized. After these changes we were able to run our workload in 2:08, an 8 second decrease. 
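Putting the feature-finding changes together, the loop ends up looking roughly like the sketch below. Variable names such as finder, features, img_names, and work_scale come from the OpenCV sample; the surrounding details in stitching_detailed.cpp differ, and the thread safety of the particular feature finder you use should be verified:

vector<Mat> full_img(num_images);
vector<Mat> img(num_images);

#pragma omp parallel for
for (int i = 0; i < num_images; ++i)
{
    full_img[i] = imread(img_names[i]);                          // per-image storage, no shared temporaries
    resize(full_img[i], img[i], Size(), work_scale, work_scale); // scale down for feature finding
    (*finder)(img[i], features[i]);                              // find features for image i
    features[i].img_idx = i;
}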

Optimizing Pairwise Matching Algorithm

Pairwise feature matching is implemented in such a way that it matches each image with every other image and results in O(n^2) scaling. This is unnecessary if we know the order that our images go in. We should be able to rewrite the algorithm so that each image is only compared against the adjacent images sequentially.

We can do this by changing this block:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
matcher(features, pairwise_matches);
matcher.collectGarbage();

to this:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(try_gpu, match_conf);
Mat matchMask(features.size(),features.size(),CV_8U,Scalar(0));
for (int i = 0; i < num_images -1; ++i)
{
    matchMask.at<char>(i,i+1) =1;
}
matcher(features, pairwise_matches,matchMask);
matcher.collectGarbage();

This change brought our execution time down to 1:54, a 14 second decrease. Note that the images must be imported in sequential order.

Optimizing Parameters

There are a number of options available that will change the resolution at which we match and blend our images together. Due to our improved matching algorithm, we increased our tolerance for error and can lower some of these parameters to significantly decrease the amount of work we are doing.

We changed these default parameters:

double work_megapix = 0.6;
double seam_megapix = 0.1;
float conf_thresh = 1.f;
string warp_type = "spherical";
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
string seam_find_type = "gc_color";

To this:

double work_megapix = 0.08;
double seam_megapix = 0.08;
float conf_thresh = 0.5f;
string warp_type = "cylindrical";
int expos_comp_type = ExposureCompensator::GAIN;
string seam_find_type = "dp_colorgrad";

After changing these parameters, our workload completed in 0:22, a decrease of 1:40. This decrease mostly comes from reducing work_megapix and seam_megapix. Time is reduced because we are now doing matching and stitching on very small images. This does decrease the amount of distinguishing features that can be found and matched, but due to our improved matching algorithm, we do not need to be as precise.

Removing Unnecessary Work

Within the compositing loop, there are two blocks of code that do not need to be repeated because we are using same sized images. These are for resizing mismatched images and initializing blending. These blocks of code can be moved directly in front of the compositing loop:

if (!is_compose_scale_set)
{
    …               
}

if (blender.empty())
{
    …
}

Note that there is one line in the warping code where we should change full_img[img_idx] to full_img[0]. By making this change, we were able to complete our workload in 0:20, a decrease of 2 seconds.

Relocation of Feature Finding

We did make one more modification, but the details of implementing this improvement depend on the context of the stitching application. In our situation, we were building an application to capture images and then stitch the images immediately after all the images were captured. If your case is similar to this, it is possible to relocate the feature finding portion of the pipeline into the image acquisition stage of your application. To do this, you should run the feature finding algorithm immediately after acquiring the image, then save the data to be ready when needed. In our experiments this removed approximately 20% from the stitching time, which in this case brings our total stitching time down to 0:16.
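As a rough sketch of this idea, features can be computed and stored as each frame is captured, then handed to the stitching pipeline instead of being recomputed. The capture callback and storage below are hypothetical; only the cv::detail types and calls come from OpenCV:

#include <opencv2/stitching/detail/matchers.hpp>
#include <vector>
using namespace cv;
using namespace cv::detail;

SurfFeaturesFinder finder;                    // SURF-based finder from OpenCV's stitching module
std::vector<Mat> captured_frames;
std::vector<ImageFeatures> precomputed_features;

// Hypothetical capture callback: store the frame and its features immediately.
void OnImageCaptured(const Mat& frame)
{
    ImageFeatures f;
    finder(frame, f);                         // FeaturesFinder exposes operator()
    f.img_idx = (int)captured_frames.size();
    captured_frames.push_back(frame);
    precomputed_features.push_back(f);        // reuse later; skip the feature-finding stage
}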

Logging

Logging isn’t enabled initially, but it is worth noting that turning on logging results in a performance decrease. We found that by enabling logging, there was a 10% increase in stitching time. It is important to turn logging off in the final version of the application.

Conclusion

With the popularity of panorama capture applications on mobile platforms it is important to have a fast, open source way to stitch images quickly. By decreasing the stitching time we are able to provide a quality experience to the end user. All of these modifications bring the total stitching time from 2:18 to 0:16, an 8.5x performance increase. This table shows the breakdown:

Modification                      Time Decrease
Multithreading with OpenMP*       0:08
Pairwise Matching Algorithm       0:14
Optimize Initial Parameters       1:40
Remove Unnecessary Work           0:02
Feature Finding Relocation        0:04

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Configurations: Intel® Atom™ quad-core SoC Z3770 with 2GB of RAM running Windows 8.1. For more information go to http://www.intel.com/performance

 

 

 

Intel, the Intel logo, and Atom are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2014 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

 

How To Plan Optimizations with Unity*


By John Wesolowski

Downloads

How To Plan Optimizations with Unity* [PDF 2.15MB]


Abstract

Unity provides a number of tools and settings to help make games perform smoothly. For this project, we chose ones we thought could prove to be troublesome and analyzed how they affected game performance on Intel® graphics processors.

We put ourselves in the shoes of a game developer learning how to use Unity. We wanted to stumble into performance pitfalls and then determine how to work through issues with Unity’s built-in performance mechanisms. One of Unity’s strengths is the ability to create content quickly, but when considering performance, especially on mobile and tablet devices, the developer needs to slow down and plan out how to utilize the built-in performance mechanisms. This paper prepares new and existing Unity users with performance considerations to keep in mind when building levels and games, and offers new ways to build.


Introduction

Creating games within Unity is fairly simple. Unity offers a store where you can purchase items like meshes, pre-written scripts, game demos, or even full games. For the purposes of my testing, I was concerned with manipulating an existing game to find areas where performance gains could or could not be achieved. I dove into the Unity Tech Demo called Boot Camp, free for download in the assets store, to see what kind of trouble I could get into.

I used Unity 3.0 to create the game settings and run all of the scenes. The testing was performed on a 3rd generation Intel® Core™ processor-based computer with Intel® HD Graphics 4000. The test results are not applicable to mobile devices.

Quality Manager

Unity has extra render settings for games found in: Edit->Project Settings->Quality menu (Figure 1). They are customizable render settings that can be modified for individual needs. Unity has helpful online documentation for explaining what the Quality Settings are and how to modify these settings through Unity’s scripting API.


Figure 1. The Tags and Layers available through the Edit->Project Settings->Tag inspector

As for my task to find optimizations with Unity, I decided to mess around with some of the Quality Settings to see what kind of gains or losses I could find, although I did not test all of the different options available.

Texture Quality

The Quality Settings Inspector has a drop down menu where you select render resolutions for your textures. You can choose from 1/8, ¼, ½, or Full Resolution. To see the performance gains/losses between different texture resolutions, I took frame rate captures of a sample scene, testing all of Unity’s default Quality Settings (Fastest, Fast, Good, etc.), while adjusting only the Texture Quality between each capture. Figures 2 and 3 show a comparison between a scene with 1/8 Texture Resolution and Full Resolution.


Figure 2. Unity* Scene Boot Camp running at 1/8 resolution


Figure 3. Unity* Scene Boot Camp running at full resolution

We took a frames per second (FPS) capture using Intel® Graphics Performance Analyzers (Intel® GPA) after changing the texture resolution. Looking at the Fantastic setting (Table 1), you can see the performance did not change much by varying the texture sizes.


Table 1. Illustrates the change in FPS while switching between Unity’s* provided texture qualities

Although an Intel® graphics-based PC’s performance is not affected by texture size changes, there are other things to consider, like the total amount of memory on the device and its usage by the application.

Shadow Distance

 

Shadow distance is a setting that changes the culling distance of the camera being used for the shadows of game objects. Game objects within the shadow distance’s value from the camera have their shadows sent for rendering, whereas objects that are not within the shadow distance value do not have their shadows drawn.

Depending on the settings used, shadows can adversely affect performance due to the amount of processing they require. To test the impact of Shadow Distance:

  • Set up a sample scene
  • Set scene to a Unity default quality setting
  • Adjust the shadow distance incrementally and take FPS captures using Intel GPA
  • Select different Unity default quality settings and repeat shadow distance captures

This test did not use the Fastest and Fast Quality Levels because those default to turning shadows off.


Figure 4. This is a setting found under the Inspector menu of Edit->Project Settings->Quality


Figure 5. Unity* Tech Demo Boot Camp


Table 2. FPS results from changing the Shadow Distance of Unity* Tech Demo, Boot Camp

Shadows significantly impact performance. The data shows the FPS dropped by almost half when going from a distance of 0 to 50 in Simple mode. It is important to consider whether game objects can actually be seen and to make sure you are not drawing shadows unnecessarily. The shadow distance and other shadow settings can be controlled during gameplay via Unity scripting and can accommodate numerous situations. Although we only tested the effects of shadow distance, we expect similar performance deltas to occur when changing the other settings under Shadows in the Quality Settings.

Layers

All game objects inside Unity are assigned to a layer upon creation. They are initially assigned to the default layer, as shown in Figure 6, but you can create your own unique layers. There are two ways to do this. You can simply click on the box next to Layer and select Add New Layer. You can also go to Edit->Project Settings->Tags.


Figure 6. The Layer menu found inside the Inspector of a game object


Figure 7. The Tag Manager via the inspector menu

From the inspector window (Figure 7) you can create a new layer and specify which layer number you want it to belong to. Both methods lead you to the same Tag Manager window. Once a layer is created, game objects can be assigned to them by choosing the desired layer under the options box next to Layer under that game object’s inspector window. This way, you can group objects in common layers for later use and manipulation. Keep in mind what layers are and how to create and modify them for when I talk about a few other layer features later in the paper.

Layer Cull Distances

In Unity, your camera will not render game objects beyond the camera’s clipping plane. There is a way, through Unity scripting, to have certain layers set to a shorter clipping plane.


Figure 8. Sample script taken from Unity*’s Online Documentation showing how to modify a layer’s culling distance

It takes a bit of work to set up game objects so they have a shorter culling distance. First, place the objects onto a layer. Then, write a script to modify an individual layer’s culling distance and attach it to the camera. The sample script in Figure 8 shows how a float array of 32 is created to correspond to the 32 possible layers available for creation under the Edit->Project Settings->Tags. Modifying a value for an index in your array and assigning it to camera.layerCullDistances will change the culling distance for the corresponding layer. If you do not assign a number for an index, the corresponding layer will use the camera’s far clip plane.

To test performance gains from layerCullDistances, I set up three scenes filled with small, medium, and large objects in terms of complexity. The scenes were arranged with a number of identical game objects grouped together and placed incrementally further and further away from the camera. I used Intel GPA to take FPS captures while incrementing the layer culling distance each time, adding another group of objects to the capture, i.e., the first capture had one group of objects, whereas the sixth capture had six groups of objects.

Figures 9, 10, and 11 show the scenes I used for testing with the different types of objects.

Boots: Poly – 278 Vertices – 218

Figure 9. Test scene filled with low polygon and vertices count boot objects

T-Rex’s: Poly – 4398 Vertices – 4400

Figure 10. Test scene filled with medium polygon and vertices count dinosaur objects

Airplane: Poly - 112,074 Vertices - 65,946

Figure 11. Test scene filled with large polygon and vertices count airplane objects

Tables 3, 4, and 5 show the change in FPS for each of the scenes tested.


Table 3. Data collected from the scene with boots (Figure 9)


Table 4. Data collected from the scene with dinosaurs (Figure 10)


Table 5. Data collected from the scene with airplanes (Figure 11)


Table 6. Fantastic mode data from all the test scenes

This data shows that performance gains can be achieved from using the layerCullDistances feature within Unity.

Table 6 illustrates how having more objects on the screen impacts performance, especially with complex objects. As a game developer, using the layerCullDistances proves to be very beneficial for performance if utilized properly. For example, smaller objects with a complex mesh that are farther away from the camera can be set up to only draw when the camera is close enough for the objects to be distinguished. While planning and designing a level, the developer needs to consider things like mesh complexity and the visibility of objects at a greater distance from the camera. By planning ahead, you can achieve greater benefits from using layerCullDistances.

Camera

I explored Unity’s camera, focusing on its settings and features. I toyed with some of the options under its GUI and examined other features and addons.


Figure 12. The Inspector menu that appears while having a camera selected

When creating a new scene, by default, there is only one camera game object labeled Main Camera. To create or add another camera, first create an empty game object by going to: Game Object->Create Empty. Then select the newly created empty object and add the camera component: Components->Rendering->Camera.

Unity’s camera comes with a host of functionality inside its GUI, as shown in Figure 12. The features I chose to explore were: Rendering Path and HDR.

Render Path

The Render Path tells Unity how to handle light and shadow rendering in the game. Unity offers three render types, listed from highest cost to least: Deferred (Pro only), Forward, and Vertex Lit rendering. Each renderer handles light and shadow a little bit differently, and they require different amounts of CPU and GPU processing power. It’s important to understand the platform and hardware you want to develop for so you can choose a renderer and build your scene or game accordingly. If you pick a renderer that is not supported by the graphics hardware, Unity will automatically fall back to a rendering path with lower fidelity.


Figure 13. Player Settings Inspector window

The Rendering Path can be set in two different ways. The first is under Edit->Project Settings->Player (Figure 13). You will find the Rendering Path drop down box under the Other Settings tab. The second is from the Camera Inspector GUI (Figure 14). Choosing something other than ‘Use Player Settings’ will override the rendering path set in your player settings, but only for that camera. So it is possible to have multiple cameras using different rendering buffers to draw the lights and shadows.


Figure 14. The drop down box from selecting the Rendering Path under the Camera GUI

Developers should know that these different light rendering paths are included in Unity and how each handles rendering. The reference section at the end of this document has links to Unity’s online documentation. Make sure you know your target audience and what type of platform they expect their game to be played on. This knowledge will help you select a rendering path appropriate to the platform. For example, a game designed with numerous light sources and image effects that uses deferred rendering could prove to be unplayable on a computer with a lower end graphics card. If the target audience is a casual gamer, who may not possess a graphics card with superior processing power, this could also be a problem. It is up to developers to know the target platform on which they expect their game to be played and to choose the lights and rendering path accordingly.

HDR (High Dynamic Range)

In normal rendering, each pixel’s red, blue, and green values are represented by a decimal number between 0 and 1. By limiting your range of values for the R, G, and B colors, lighting will not look realistic. To achieve a more naturalistic lighting effect, Unity has an option called HDR, which when activated, allows the number values representing the R, G, and B of a pixel to exceed their normal range. HDR creates an image buffer that supports values outside the range of 0 to 1, and performs post-processing image effects, like bloom and flares. After completing the post-processing effects, the R, G, and B values in the newly created image buffer are reset to values within the range of 0 to 1 by the Unity Image Effect Tonemapping. If Tonemapping is not executed when HDR is included, the pixels could be out of the normal accepted range and cause some of the colors in your scene to look wrong in comparison to others.

Pay attention to a few performance issues when using HDR. If you are using Forward rendering for a scene, HDR will only be active if image effects are present; otherwise, turning HDR on will have no effect. Deferred rendering supports HDR regardless.

If a scene is using Deferred rendering and has Image Effects attached to a camera, HDR should be activated. Figure 15 compares the draw calls for a scene with image effects and deferred rendering while HDR is turned on and HDR is off. With HDR off and image effects included, you see a larger number of draw calls than if you include image effects with HDR turned on. In Figure 15, the number of draw calls is represented by the individual blue bars, and the height of each blue bar reveals the amount of GPU time each draw call took.


Figure 15. The capture from Intel® Graphics Performance Analyzers with HDR OFF shows over 2000 draw calls, whereas the capture with HDR ON has a little over 900 draw calls.

Read over Unity’s HDR documentation and understand how it affects game performance. You should also know when it makes sense to use HDR to ensure you are receiving its full benefits.

Image Effects

Unity Pro comes with a range of image effects that enhance the look of a scene. Add Image Effects assets, even after creating your project, by going to Assets->Import Package->Image Effects. Once imported, there are two ways to add an effect to the camera. Click on your camera game object, then within the camera GUI, select Add Component, then Image Effects. You can also click on your camera object from the menu system by going to Component->Image Effect.

SSAO – Screen Space Ambient Occlusion

 

Screen space ambient occlusion (SSAO) is an image effect included in Unity Pro’s Image Effect package. Figure 16 shows the difference between a scene with SSAO off and on. The images look similar, but performance is markedly different. The scene without SSAO ran at 32 FPS and the scene with SSAO ran at 24 FPS, a 25% decrease.


Figure 16. A same level comparison with SSAO off (top) vs. SSAO on (bottom)

Be careful when adding image effects because they can negatively affect performance. For this document we only tested the SSAO image effect but expect to see similar results with the other image effects.

Occlusion Culling

Occlusion Culling disables object rendering not only outside of the camera’s clipping plane, but for objects hidden behind other objects as well. This is very beneficial for performance because it cuts back on the amount of information the computer needs to process, but setting up occlusion culling is not straightforward. Before you set up a scene for occlusion culling, you need to understand the terminology.

    Occluder– An object marked as an occluder acts as a barrier that prevents objects marked as occludees from being rendered.

    Occludee– Marking a game object as an occludee will tell Unity not to render the game object if blocked by an occluder.

For example, all of the objects inside a house could be tagged as occludees and the house could be tagged as an occluder. If a player stands outside of that house, all the objects inside marked as occludees will not be rendered. This saves CPU and GPU processing time.

Unity documents Occlusion Culling and its setup. You can find the link for setup information in the references section.

To show the performance gains from using Occlusion Culling, I set up a scene that had a single wall with highly complex meshed objects hidden behind. I took FPS captures of the scene while using Occlusion Culling and then without it. Figure 17 shows the scene with the different frame rates.


Figure 17. The image on the left has no Occlusion Culling, so the scene takes extra time to render all the objects behind the wall, resulting in an FPS of 31. The image on the right takes advantage of Occlusion Culling, so the objects hidden behind the wall are not rendered, resulting in an FPS of 126.

Occlusion culling requires developers to do a lot of manual setup. They also need to consider occlusion culling during game design so as to make the game’s configuration easier and the performance gains greater.

Level of Detail (LOD)

Level of Detail (LOD) allows multiple meshes to attach to a game object and provides the ability to switch between meshes the object uses based on camera distance. This can be beneficial for complex game objects that are really far away from the camera. The LOD can automatically simplify the mesh to compensate. To see how to use and set up LOD, check out Unity’s online documentation. The link to it is in the reference section.

To test the performance gains from LOD, I built a scene with a cluster of houses with 3 different meshes attached to them. While standing in the same place, I took an FPS capture of the houses when the most complex mesh was attached. I then modified the LOD distance so the next lesser mesh appeared, and took another FPS capture. I did this for the three mesh levels and recorded my findings as shown in Table 7.

Figures 18, 19, and 20 show the three varying levels of mesh complexity as well as the number of polygons and vertices associated with each mesh.

 Best Quality – LOD 0
Building A
  • Vert – 7065
  • Poly – 4999
Building B
  • Vert - 5530
  • Poly – 3694


Figure 18. LOD level 0. This is the highest LOD level that was set with the
more complex building meshes

 Medium Quality – LOD 1
Building A
  • Vert – 6797
  • Poly – 4503
Building B
  • Vert – 5476
  • Poly – 3690


Figure 19. LOD level 1. The next step on the LOD scale; this level was set with
the medium complexity meshes

   Low Quality – LOD 2
Building A
  • Vert – 474
  • Poly – 308
Building B
  • Vert – 450
  • Poly – 320


Figure 20. LOD level 2.This LOD level was the last one used and contained
the least complex meshes for the buildings

As I switched between the different LOD models, I took FPS captures for comparison (Table 7).


Table 7. LOD FPS comparison switching between lower model meshes

Table 7 shows the increased performance gains from setting up and using LOD. The FPS captures show significant performance gains when using lower quality meshes. This, however, can require a lot of extra work from the 3-D artists, who must produce multiple models. It is up to the game designer to decide whether or not spending the extra time on more models is worth the performance gains.

Batching

Having numerous draw calls can cause overhead on the CPU and slow performance. The more objects on the screen, the more draw calls need to be made. Unity has a feature called Batching that combines game objects into a single draw call. Static Batching affects static objects, and Dynamic Batching is for those that move. Dynamic Batching happens automatically, if all requirements are met (see the batching documentation), whereas Static Batching needs to be set up.

There are some requirements for getting the objects to draw together for both Dynamic and Static Batching, all of which are covered in Unity’s Batching document listed in the references section.

To test the performance gains of Static Batching, I set up a scene with complex airplane game objects (Figure 21) and took FPS captures of the airplanes both with batching and without batching (Table 8).


Figure 21. Static Batching Test scene filled with very complex airplane meshes


Table 8. Showing the difference between FPS and Draw Calls while turning static batching on and off for the test scene (Figure 21)

Unity’s batching mechanism comes in two forms, Dynamic and Static. To fully see the benefits from batching, plan to have as many objects as possible batched together for single draw calls. Refer to Unity’s batching documentation and know what qualifies an object for dynamic or static batching.

Conclusion

While Unity proves to be fairly simple to pick up and develop with, it can also be very easy to get yourself into performance trouble. Unity provides a number of tools and settings to help make games perform smoothly, but not all of them are as intuitive and easy to set up as others. Likewise, Unity has some settings that when turned on or used inappropriately can negatively affect game performance. An important part of developing with Unity is to have a plan before starting because some of the performance features require manual setup and can be much more challenging to implement if not planned at the project’s creation.

References

Quality Settings Documentation:
http://docs.unity3d.com/Documentation/Components/class-QualitySettings.html

Quality Settings Scripting API:
http://docs.unity3d.com/Documentation/ScriptReference/QualitySettings.html

Tech Demo Bootcamp:
http://u3d.as/content/unity-technologies/bootcamp/28W

Level of Detail Documentation:
http://docs.unity3d.com/Documentation/Manual/LevelOfDetail.html

Occlusion Culling Documentation:
http://docs.unity3d.com/Documentation/Manual/OcclusionCulling.html

Batching Documentation:
http://docs.unity3d.com/Documentation/Manual/DrawCallBatching.html

Rendering Path Documentation:
http://docs.unity3d.com/Documentation/Manual/RenderingPaths.html

Intel GPA:
http://software.intel.com/en-us/vcsource/tools/intel-gpa

Other Related Content and Resources

Unity MultiTouch Source (finally)
http://software.intel.com/en-us/blogs/2013/05/01/the-unity-multi-touch-source-finally

Implementing Multiple Touch Gestures Using Unity3D With Touchscript
http://software.intel.com/en-us/articles/implementing-multiple-touch-gestures-using-unity-3d-with-touchscript

Multithreading Perceptual Computing Applications in Unity3D
http://software.intel.com/en-us/blogs/2013/07/26/multithreading-perceptual-computing-applications-in-unity3d

Unity3D Touch GUI Widgets
http://software.intel.com/en-us/articles/unity-3d-touch-gui-widgets

About the Author

John Wesolowski, Intern
The focus of the group that I worked for at Intel was to enable Intel® chipsets for upcoming technology, with a focus on video games. It was our task to test the latest and upcoming video games to find potential bugs or areas of improvement inside the Intel® architecture or in the video game.

Outside of work, my all-time favorite activity used to be playing Halo* 2 online with my friends but since Microsoft shut down all Xbox LIVE* service for original Xbox* games, my friends and I like to LAN Halo 2 whenever we can. I also enjoy playing poker and flying kites. I am currently attending California State University, Monterey Bay and pursuing a degree in Computer Science and Information Technology.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Debugging Intel® Xeon Phi™ Applications on Windows* Host



Introduction

Intel® Xeon Phi™ coprocessor is a product based on the Intel® Many Integrated Core Architecture (Intel® MIC). Intel® offers a debug solution for this architecture that can debug applications running on an Intel® Xeon Phi™ coprocessor.

There are many reasons why a debug solution for Intel® MIC is needed. Some of the most important ones are the following:

  • Developing native Intel® MIC applications is as easy as for IA-32 or Intel® 64 hosts. In most cases they just need to be cross-compiled (/Qmic).
    Yet, Intel® MIC Architecture is different from the host architecture. Those differences could unveil existing issues. Also, incorrect tuning for Intel® MIC could introduce new issues (e.g., data alignment, whether an application can handle hundreds of threads, efficient memory consumption, etc.).
  • Developing offload enabled applications induces more complexity as host and coprocessor share workload.
  • General lower level analysis, tracing execution paths, learning the instruction set of Intel® MIC Architecture, …

Debug Solution for Intel® MIC

For Windows* host, Intel offers its own debug solution, the Intel® Debugger Extension for Intel® MIC Architecture Applications. It supports debugging offload enabled applications as well as native Intel® MIC applications running on the Intel® Xeon Phi™ coprocessor.

How to get it?

To obtain Intel’s debug solution for Intel® MIC Architecture on Windows* host, you need the following:

Debug Solution as Integration

Debug solution from Intel® based on GNU* GDB 7.5:

  • Full integration into Microsoft Visual Studio*, no command line version needed
  • Available with Intel® Composer XE 2013 SP1 and later


Why integration into Microsoft Visual Studio*?

  • Microsoft Visual Studio* is an established IDE on Windows* hosts
  • Integration reuses existing usability and features
  • Fortran support added with Intel® Fortran Composer XE

Components Required

The following components are required to develop and debug for Intel® MIC Architecture:

  • Intel® Xeon Phi™ coprocessor
  • Windows* Server 2008 R2, Windows* 7 or later
  • Microsoft Visual Studio* 2012 or later
    Support for Microsoft Visual Studio* 2013 was added with Intel® Composer XE 2013 SP1 Update 1.
  • Intel® MPSS 3.1 or later
  • C/C++ development:
    Intel® C++ Composer XE 2013 SP1 for Windows* or later
  • Fortran development:
    Intel® Fortran Composer XE 2013 SP1 for Windows* or later

Configure & Test

It is crucial to make sure that the coprocessor setup is correctly working. Otherwise the debugger might not be fully functional.

Setup Intel® MPSS:

  • Follow Intel® MPSS readme-windows.pdf for setup
  • Verify that the Intel® Xeon Phi™ coprocessor is running

Before debugging applications with offload extensions:

  • Use official examples from:
    C:\Program Files (x86)\Intel\Composer XE 2013 SP1\Samples\en_US
  • Verify that offloading code works

Prerequisite for Debugging

Debugger integration for Intel® MIC Architecture only works when debug information is available:

  • Compile in debug mode with at least the following option set:
    /Zi (compiler) and /DEBUG (linker)
  • Optional: Unoptimized code (-Od) makes debugging easier
    (due to removed/optimized away temporaries, etc.)

Applications can only be debugged in 64 bit

  • Set platform to x64
  • Verify that /MACHINE:x64 (linker) is set!

Debugging Applications with Offload Extension

Start Microsoft Visual Studio* IDE and open or create an Intel® Xeon Phi™ project with offload extensions. Examples can be found in the Samples directory of Intel® Composer XE, that is:

C:\Program Files (x86)\Intel\Composer XE 2013 SP1\Samples\en_US

  • C++\mic_samples.zip    or
  • Fortran\mic_samples.zip

We’ll use intro_SampleC from the official C++ examples in the following.

Compile the project with Intel® C++/Fortran Compiler.

Characteristics of Debugging

  • Set breakpoints in code (during or before debug session):
    • In code mixed for host and coprocessor
    • Debugger integration automatically dispatches between host/coprocessor
  • Run control is the same as for native applications:
    • Run/Continue
    • Stop/Interrupt
    • etc.
  • Offloaded code stops execution (offloading thread) on host
  • Offloaded code is executed on coprocessor in another thread
  • IDE shows host/coprocessor information at the same time:
    • Breakpoints
    • Threads
    • Processes/Modules
    • etc.
  • Multiple coprocessors are supported:
    • Data shown is mixed:
      Keep in mind the different processes and address spaces
    • No further configuration needed:
      Debug as you go!

Setting Breakpoints

Debugging Applications with Offload Extension - Setting Breakpoints

Note the mixed breakpoints here:
The ones set in the normal code (not offloaded) apply to the host. Breakpoints on offloaded code apply to the respective coprocessor(s) only.
The Breakpoints window shows all breakpoints (host & coprocessor(s)).

Start Debugging

Start debugging as usual via menu (shown) or <F5> key:
Debugging Applications with Offload Extension - Start Debugging

While debugging, continue till you reach a set breakpoint in offloaded code to debug the coprocessor code.

Thread Information

Debugging Applications with Offload Extension - Thread Information

Information of host and coprocessor(s) is mixed. In the example above, the threads window shows two processes with their threads. One process comes from the host, which does the offload. The other one is the process hosting and executing the offloaded code, one for each coprocessor.

Debugging Native Coprocessor Applications

Pre-Requisites

Create a native Intel® Xeon Phi™ application and transfer & execute the application to the coprocessor target:

  • Use micnativeloadex.exe provided by Intel® MPSS for an application C:\Temp\mic-examples\bin\myApp, e.g.:

    > "C:\Program Files\Intel\MPSS\sdk\coi\tools\micnativeloadex\micnativeloadex.exe""C:\Temp\mic-examples\bin\myApp" -d 0
     
  • Option –d 0 specifies the first device (zero based) in case there are multiple coprocessors per system
  • This application is executed directly after transfer

Using micnativeloadex.exe also takes care about dependencies (i.e. libraries) and transfers them, too.

Other ways to transfer and execute native applications are also possible (but more complex):

  • SSH
  • NFS
  • FTP
  • etc.

Debugging native applications with the Visual Studio* IDE is only possible via Attach to Process…:

  • micnativeloadex.exe has been used to transfer and execute the native application
  • Make sure the application waits till attached, e.g. by:
    
    		static int lockit = 1;
    
    		while(lockit) { sleep(1); }
    
    		
  • After having attached, set lockit to 0 and continue.
  • No Visual Studio* solution/project is required.

Only one coprocessor at a time can be debugged this way.

Configuration

Open the options via TOOLS/Options… menu:

Debugging Native Coprocessor Applications - Configuration

It tells the debugger extension where to find the binary and sources. This needs to be changed every time a different coprocessor native application is being debugged.

The entry solib-search-path directories works the same as the analogous GNU* GDB command. It allows you to map paths from the build system to the host system running the debugger.

The entry Host Cache Directory is used for caching symbol files. It can speed up lookup for big sized applications.

Attach

Open the options via TOOLS/Attach to Process… menu:

Debugging Native Coprocessor Applications - Attach to Process...

Specify the Intel(R) Debugger Extension for Intel(R) MIC Architecture. Set the IP and port the GDBServer is running on; the default port of the GDB-Server is 2000, so use that.

After a short delay the processes of the coprocessor card are listed. Select one to attach.

Note:
Checkbox Show processes from all users does not have a function for the coprocessor as user accounts cannot be mapped from host to target and vice versa (Linux* vs. Windows*).


Intel(R) Software Manager has stopped working


There have been some instances where, after Intel(R) Software Manager is launched, Microsoft Windows* displays "Intel Software Manager has stopped working"...

Image of Window stating Intel Software Manager has stopped working.

To resolve this, remove the Intel Software Manager user configuration files: C:\Users\<account name>\AppData\Local\Intel_Corporation\ism2.exe_*\*.*

New 10x10 Process Combines Participatory Design with Agile Product Development


By Garret Romaine

Download Article

New 10x10 Process Combines Participatory Design with Agile Product Development [PDF 588KB]


Traditional user experience (UX) testing is often a lengthy process, using dozens of test subjects and requiring considerable time both in preparations beforehand and analysis after the testing ends. This type of approach often works against Agile processes, as development teams have little time or incentive to provide guidance for the testing.

Enter Dr. Daria Loi, who oversees UX Innovation in the PC Client Group at Intel. Loi knew the traditional way of doing business was increasingly out of sync with the needs of a modern Agile team. So she came up with a new testing methodology she calls “10x10,” which limits the user population to 10 test subjects and restricts the testing cycle to 10 weeks. As Loi explained, “This process is different from the more traditional user experience process, where you develop a product to the prototype level, then heavily test with end users. Here, end users are enlisted as design partners from the very start—they participate in the design process. In fact, they have a weekly voice and consequently feel responsible for the end product. We empower them to be much more vocal with their insights.”

Hardware engineers and app developers can now get better and faster user input during the crucial early stages of a project. In the 10x10 testing methodology, teams work closely with end users over a short amount of time and rapidly gather useful information about the product as it evolves. Intel’s new process is too important to keep a secret—it’s worth sharing with all developers who want to build better products that end users can actually use.

Great Experiences are No Accident

Loi is committed to getting the user experience of Intel’s products right. Since the days of her first academic studies in architecture and industrial design, she’s been relentless in her quest for a better UX. Loi has always seen the UX role as crucial to the future of product and service development, and she is completely committed to continuously evolving her practice as necessary. “In my team I represent the ultimate voice of the end user for our key system designs,” she said. “As a UX practitioner, I have a great responsibility in this process and must evolve the way I do UX as needed. I must ensure we make the right decisions in the right time frame and context.”

In an ideal product development process, engineering, marketing and design teams work in a collaborative partnership based on continual interaction to streamline and accelerate the decision-making process. This partnership ideally relies on a solid understanding of the target audience, through ongoing input from UX and market researchers. The reality is different, however; on one hand a tight bond among all development teams is difficult to establish, and on the other hand, the time required to generate UX data is rarely aligned with the product-development schedule.

Loi has seen that the Agile development methodology is a great process to push the envelope with regard to fast iteration and solid teamwork, but end-user input has suffered. It’s hard to integrate users’ input into a team that is iterating on a short clock. Experts, like Loi, believe that Participatory Design (PD)—an established design practice where end users are directly involved in the co-design of the things and technologies they use—is a key to Agile success. With Loi’s new 10x10 process, Agile teams can experience a clear path forward that promises to improve on all of the key success factors.


Figure 1: UX testers often use notes to gather succinct feedback from test subjects and then group the comments to see if trends jump out.

A New Way Forward: 10x10

Loi and her team were recently instrumental in developing a number of reference designs for Ultrabook™ systems, including a convertible device (code-named Cove Point) based on the 3rd generation Intel® Core™ processor (Ivy Bridge), a reference design system (North Cape) with a detachable 1080p screen and 4th generation Intel® Core™ (Haswell) processor, and a detachable system with a 5th generation Intel® Core™ (Broadwell-Y) processor.

For all projects, Loi and her team sought faster integration of key findings, a tighter bond between the test subjects and the designers, engineers, and developers, and a better all-around product. As a result of this ongoing streamlining effort, Loi developed her 10x10 process to address increasingly aggressive project timeframes and crucial needs of her partner development teams.

In Loi’s 10x10 process, users were deeply involved in the development process, in a Participatory Design fashion. The actual testing took place weekly, with participants providing input on what the design and engineering teams worked out during the previous week. Loi and her team wrote up the results and reported the data as quickly as possible, in a cadence similar to the description given in Table 1.

Table 1. Daily tasks for reporting data during the 10x10 process.

Monday: Finalize formal report. Share findings with all internal stakeholders.
Tuesday: Answer final questions and close the books on last week’s testing.
Wednesday: Prepare for the next round of testing, which starts Thursday.
Thursday: Conduct user testing.
Friday: Review the high-level findings from Thursday’s testing with team members.

 

A key component of the process is the selection of the best target users. Loi recruited participants with diverse demographics according to a range of parameters (age, gender, income level, employment, family composition, and device ownership, among others). Additionally, each subject represented one of the target consumer segments for that specific product.

“For these projects, I needed end users to be my design partners, so I selected subjects who expressed themselves well,” she said. “I needed assertiveness in their articulation. You want people who won’t hold back or try to please you with their feedback. They must be able to explain clearly and uncompromisingly what they mean, need, dislike, want, and desire.”

Loi met with participants for two hours every Thursday and development teams were invited to attend these weekly design critique sessions, enabling them to receive real-time feedback. Then on Friday, Loi provided summaries for all the people who didn’t attend the session, focusing on areas where the team needed to act immediately, essentially writing up the high-level action items. By Monday, her team distributed the official report to all internal stakeholders, including the executive teams.

“There was a strong cadence,” Loi said. “End users had a weekly voice, and this quickly created an interesting phenomenon: a strong sense of personal empowerment and responsibility to the end product. Because of this, they became increasingly vocal in their insights, because they felt—they actually were—part of the team. It was a quite wonderful thing to see at work and be part of.”

In the end, the final design was substantially different from the original concept, according to Loi. Some of the initial choices did not work well, according to the end users. “Each time we got strong, unanimous feedback, we addressed it right away,” Loi said of the development team.

For example, one of the chassis designs favored by the design team was the least favorite of the test subjects. After putting it to the test a couple of times and getting similarly strong negative feedback, the project leaders decided to drop that design and move in another direction.

Similarly, the original hinge design sparked a number of complaints, so designers came up with changes to test and ratify. There were also advances with the screen angle, touchpad size, and keyboard design. Testing would reveal challenges, designers would come up with alternatives, and the test team would report back so that project leaders could make informed decisions.

Striking Results

Loi knew she was on to something when the word began to spread that the test sessions were consistently revealing actionable data points. “One lead designer found so much value in the sessions that he wanted to make attendance compulsory for the entire team,” Loi said. “Some tech companies have created programs where engineers are required to directly engage in periodical user testing for similar reasons. Anything that tightens the communication between the end user and those that design technology will pay off in the end.”

Within the context she usually operates, Loi said that her 10x10 process is a big step forward. “It’s the first time we used this approach at Intel, as far as I know. If we look at the literature, Participatory Design approaches are not yet the standard tradition for Agile methodologies. I have been an active member of the Participatory Design community since the late ’90s, and I know that although the debate around PD and Agile is lively in some communities, there are not as many case studies as I would like to see—especially within large-scale product development contexts.”

Because product-development timelines are incredibly aggressive, a traditional UX-testing cycle often ends up influencing only some decisions—by the time prototypes are ready for testing, many key decisions have already been made and it is simply too late to make significant changes.

With Loi’s approach, her teams get rich feedback on a weekly basis, as the product is being designed. This makes the development team accountable for incorporating requested changes, and it builds a data-driven condition where assumptions and opinions no longer count and are no longer smart to entertain. “This means that personal preferences have less relevance and are more difficult to assert,” Loi said. “It’s a scary concept for some creators, who like to push their own views of what a product should be and do.”

All in the Family

Critics might ask about the embedded nature of the test subjects, who become important parts of the development team. When the test subjects become such an important part of the team, the tendency could be for them to “go native” and start going along with designers who face tough engineering challenges to implement good design ideas.

Loi said the result has been the opposite. “When you are part of the family, something important happens: you care, a lot. Since we elevated the end users from being sources of information to actual partners, they became empowered and responsible for the end result. Over the weeks, I noticed a growth in focus, detailing, strength, and depth. They had a strong memory of what they said in each previous session and made us accountable when they felt we did not deliver. The nuances and richness of the feedback increased over time—the last few sessions were phenomenal in that sense,” she said. “Also, it is important to note that the 10x10 process is typically complemented by other UX research, including quantitative and longitudinal studies, to triangulate data and deepen findings.”

Since Loi and her team had such fantastic data to communicate, they didn’t mind the intense time commitment required to get the data into the hands of the rest of the team. The process was intense—but with the right tools and techniques, her team quickly documented what the test subjects were saying, with high quality and dramatic impact—and Loi didn’t have to sacrifice her weekends to get a report done by Monday.

But in the Agile world, even that wait must seem interminable for some on the team. Loi knows that there will be ongoing demands to compress that reporting component even more. That’s where her daily interactions with the development team came in handy. “You cannot be an effective UX practitioner otherwise. You MUST be on the core design and development team.”

Faster, Better, and Cheaper?

Just about everyone who toils at some level in the technology world knows that there is one absolute truism in business: when it comes to faster, better, and cheaper, you get to pick two. Because the emphasis on speed is not going to change anytime soon, testing will have to innovate faster, think faster, and even fail faster.

Loi said there shouldn’t be any doubt that Intel is strongly committed to continually improving the user experience. From her perspective, that commitment expresses itself at multiple levels of the organization, from the number of UX practitioners present in the company, to the way in which designers, engineers, and developers talk about their products.

To Loi, it has been an incredible journey of evolution and progression from a UX perspective. Engineers who might have balked at slowing down for user testing now embrace the process and act as advocates. They now come to Loi and request testing. She has seen a major turnaround that she’s proud to be a part of. But self-congratulation isn’t part of her DNA.

“Once UX is embraced at the highest level of the company, there are huge implications from any perspective. Luckily, we have a lot of good practitioners. Also, most executive teams have now actively embraced user experience and routinely advocate for our work. This means a lot because it's not easy to have buy-in at such high levels. So it's pretty phenomenal, but there is still a lot of work to do. You can always do better.”

Related Content

The Human Touch: Building Ultrabook™ Applications in a Post-PC Age
Pointing the Way: Designing a Stylus-driven Device in a Mobile World
How Multi-Region User Experience Influences Touch on Ultrabook

 

Intel, the Intel logo, Intel Core, Look Inside, The Look Inside logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2014. Intel Corporation. All rights reserved.

PERCEPTUAL COMPUTING: Augmenting the FPS Experience

$
0
0

Downloads

PERCEPTUAL COMPUTING: Augmenting the FPS Experience [PDF 977KB]


1. Introduction

For more than a decade, we've enjoyed the evolution of the First Person Shooter (FPS) Genre, looking at games through the eyes of the protagonist and experiencing that world first hand. To exercise our control, we've been forced to communicate with our avatar through keyboard, mouse, and controllers to make a connection with that world. Thanks to Perceptual Computing, we now have additional modes of communication that bring interaction with that world much closer. This article not only covers the theory of perceptual controls in FPS games, but demonstrates actual code that allows the player to peek around corners by leaning left or right. We will also look at using voice control to select options in the game and even converse with in-game characters.

A familiarity with the Intel® Perceptual Computing SDK is recommended but not essential, and although the code is written in Dark Basic Professional (DBP), the principles also apply to C++, C#, and Unity*. The majority of this article covers the theory and practice of augmenting the First Person experience and is applicable not only to games but also to simulations, tours, and training software.

In this article, we’ll be looking at augmenting the FPS game genre, a popular mainstay of modern gaming and one that has little to no Perceptual Computing traction. This situation is partly due to the rigid interface expectations required from such games and partly to the relative newness of Perceptual Computing as an input medium.

As you read this article, you will be able to see that with a little work, any FPS can be transformed into something so much more. In a simple firefight or a horror-thriller, you don't want to be looking down at your keyboard to find the correct key—you want to stay immersed in the action. Figuring out the combination of keys to activate shields, recharge health, duck behind a sandbag, and reload within a heartbeat is the domain of the veteran FPS player, but these days games belong to the whole world, not just the elite. Only Perceptual Computing has the power to provide this level of control without requiring extensive practice or lightning-fast hand/eye coordination.


Figure 1. When reaction times are a factor, looking down at the keyboard is not an option

We’ve had microphones for years, but it has only been recently that voice recognition has reached a point where arbitrary conversations can be created between the player and the computer. It’s not perfect, but it’s sufficiently accurate to begin a realistic conversation within the context of the game world.


Figure 2. Wouldn’t it be great if you could just talk to characters with your own voice?

You’ve probably seen a few games now that use non-linear conversation engines to create a sense of dialog using multiple choices, or a weapon that has three or four modes of fire. Both these features can be augmented with voice control to create a much deeper sense of immersion and a more humanistic interface with the game.

This article will look at detecting what the human player is doing and saying while playing a First Person experience, and converting that into something that makes sense in the gaming world.

2. Why Is This Important

As one of the youngest and now one of the largest media industries on the planet, gaming has incredible potential for technological advancement, and bridging the gap between user and computer is one of the most exciting opportunities. One step in this direction is a more believable, immersive experience, one that relies on our natural modes of interaction instead of the artificial ones created for us.

With a camera that can sense what we are doing and a microphone that can pick up what we say, you have almost all the ingredients to bridge this gap entirely. It only remains for developers to take up the baton and see how far they can go.

For developers who want to push the envelope and innovate around emerging technologies, this subject is vitally important to the future of the First Person experience. There is only so much a physical controller can do, and for as long as we depend on it for all our game controls we will be confined to its limitations. For example, a controller cannot detect where we are looking in the game, it has to be fed in, which means more controls for the player. It cannot detect the intention of the player; it has to wait until a sequence of button presses has been correctly entered before the game can proceed. Now imagine a solution that eliminates this middle-man of the gaming world, and ask yourself how important it is for the future of gaming.


Figure 3. Creative* Interactive Gesture Camera; color, depth and microphone – bridging the reality gap

Imagine the future of FPS gaming. Imagine all your in-game conversations being conducted by talking to the characters instead of selecting buttons on the screen. Imagine your entire array of in-game player controls commanded via a small vocabulary of commonly spoken words. The importance of these methods cannot be understated, and they will surely form the basis of most, if not all, FPS game interfaces in the years to come.

3. Detect Player Leaning

You have probably played a few FPS games and are familiar with the Q and E keys to lean left and right to peek around corners. You might also have experienced a similar implementation where you can click the right mouse button to zoom your weapon around a corner or above an obstacle. Both game actions require additional controls from the player and add to the list of things to learn before the game starts to feel natural.

With a perceptual computing camera installed, you can detect where the head and shoulders of your human player lie in relation to the center of the screen. By leaning left and right in the real world, you can mimic this motion in the virtual game world. No additional buttons or controls are required, just lean over to peek around a corner, or sidestep a rocket, or dodge a blow from an attacker, or simply view an object from another angle.


Figure 4. Press E or lean your body to the right. Which one works for you?

In practice, however, you will find this solution has a serious issue. The gaming experience is disrupted by a constantly moving (even jittering) perspective as the human player naturally shifts position while playing. This is especially disruptive during cut-scenes and for fine-grain controls, such as using the crosshair to select small objects in the game. There are two solutions: the first is to create a series of regions that only trigger a shift at more extreme lean angles, and the second is to disable the feature altogether in certain game modes, as mentioned above.


Figure 5. Dividing a screen into horizontal regions allows better game leaning control

By having these regions defined, the majority of the gaming is conducted in the center zone, and only when the player makes extreme leaning motions does the augmentation kick in and shift the game perspective accordingly.

Implementing this technique is very simple and requires just a few commands. You can use the official Intel Perceptual Computing SDK or you can create your own commands from the raw depth data. Below is the initialization code for a module created for the DBP language, which reduces the actual coding to just a few lines.

rem Init PC
perceptualmode=pc init()
pc update
normalx#=pc get body mass x()
normaly#=pc get body mass y()

The whole technique can be coded with just three commands. The first initializes the perceptual computing camera and returns whether the camera is present and working. The second command asks the camera to take a snapshot and do some common background calculations on the depth data. The last two lines grab something called a Body Mass Coordinate, which is the average coordinate of any foreground object in the field of the depth camera. For more information on the Body Mass Coordinate technique, read the article on Depth Data Techniques (http://software.intel.com/en-us/articles/perceptual-computing-depth-data-techniques).

Of course detecting the horizontal zones requires a few more simple lines of code, returning an integer value that denotes the mode and then choosing an appropriate angle and shift vector that can be applied to the player camera.

rem determine lean mode
do
 leanmode=0
 normalx#=pc get body mass x()/screen width()
 if normalx#<0.125
  leanmode=-2
 else
  if normalx#<0.25
   leanmode=-1
  else
   if normalx#>0.875
    leanmode=2
   else
    if normalx#>0.75
     leanmode=1
    endif
   endif
  endif
 endif
 leanangle#=0.0
 leanshiftx#=leanmode*5.0
 select leanmode
  case -2 : leanangle#=-7.0 : endcase
  case -1 : leanangle#=-3.0 : endcase
  case  1 : leanangle#= 3.0 : endcase
  case  2 : leanangle#= 7.0 : endcase
 endselect
 pc update
loop

Applying these lean vectors to the player camera is simplicity itself, and disabling it when the game is in certain modes will ensure you get the best of both worlds. Coding this in C++ or Unity simply requires a good head tracking system to achieve the same effect. To get access to this DBP module, please contact the author via twitter at https://twitter.com/leebambertgc. The buzz you get from actually peering around a corner is very cool, and is similar to virtual/augmented reality, but without the dizziness!
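
For C++ developers, a rough equivalent of the zoning logic is shown below. This is only a hedged sketch, not SDK code: it assumes your head-tracking layer already gives you a head (or body-mass) X position normalized to the range 0.0–1.0 across the camera image, and it simply maps that value to the same lean modes, angles, and shifts used in the DBP example.

// Hedged C++ sketch: map a normalized head X position (0.0 = far left,
// 1.0 = far right of the camera image) to a lean mode, camera roll angle,
// and sideways shift. The input value is assumed to come from your own
// head-tracking code; it is not provided by this snippet.
struct LeanState {
    int   mode;     // -2, -1, 0, 1, 2 as in the DBP example
    float angle;    // camera roll in degrees
    float shiftX;   // sideways camera offset in world units
};

LeanState ComputeLean(float headX)
{
    LeanState s = {0, 0.0f, 0.0f};
    if      (headX < 0.125f) s.mode = -2;
    else if (headX < 0.25f)  s.mode = -1;
    else if (headX > 0.875f) s.mode =  2;
    else if (headX > 0.75f)  s.mode =  1;

    static const float angles[5] = {-7.0f, -3.0f, 0.0f, 3.0f, 7.0f};
    s.angle  = angles[s.mode + 2];
    s.shiftX = s.mode * 5.0f;
    return s;
}

As with the DBP version, the resulting angle and shift would be applied to the player camera each frame, ideally smoothed over a few frames to avoid sudden jumps.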

4. Detect Player Conversations

Earlier versions of the Intel® Perceptual Computing SDK had some issues with accurate voice detection, and even when it worked, it only understood a U.S. accent. The latest SDK, however, is superb: it can deal with multiple language accents and detects British voices very well. Running the sample code in the SDK and parroting sentence after sentence proves just how uncannily accurate it is now, and you find yourself grinning at the spectacle.

If you’re a developer old enough to remember the ‘conversation engines’ of the 8-bit days, you will recall the experimental applications that involved the user typing anything they wanted, and the engine picking out specific trigger words and using those to carry on the conversation. It could get very realistic sometimes, but often ended with the fall-back of ‘and how do you feel about that?’


Figure 6. A simple conversation engine from the adventure game “Relics of Deldroneye”

Roll the clock forward about 30 years and those early experiments could actually turn out to be something quite valuable for a whole series of new innovations with Perceptual Computing. Thanks to the SDK, you can listen to the player and convert everything said into a string of text. Naturally, it does not get it right every time, but neither do humans (ever play Chinese whispers?). Once you have a string of text, you can have a lot of fun with figuring out what the player meant, and if it makes no sense, you can simply get your in-game character to repeat the question.

A simple example would be a shopkeeper in an FPS game, opening with the sentence, “what would you like sir?” The Intel® Perceptual Computing SDK also includes a text-to-speech engine so you can even get your characters to use the spoken word, much more preferred in modern games than the ‘text-on-screen’ method. Normally in an FPS game, you would either just press a key to continue the story, or have a small multi-choice menu of several responses. Let’s assume the choices are “nothing,” “give me a health kit,” or “I want some ammo.” In the traditional interface you would select a button representing the choice you wanted through some sort of user interface mechanism.

Using voice detection, you could parse the strings spoken by the player and look for any words that would indicate which of the three responses was used. It does not have to be the exact word or sentence as this would be almost impossible to expect and would just lead to frustration in the game. Instead, you would look for keywords in the sentence that indicate which of the three is most likely.

NOTHING = “nothing, nout, don’t anything, bye, goodbye, see ya”

HEALTH = “health, kit, medical, heal, energy”

AMMO = “ammo, weapon, gun, bullets, charge”

Of course, if the transaction was quite important in the game, you would ensure the choice made was correct with a second question to confirm it, such as “I have some brand new ammo, direct from the factory, will that do?” The answer of YES and NO can be detected with 100% certainty, which will allow the game to proceed as the player intended.
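
To make the keyword approach concrete, here is a hedged C++ sketch of the parsing step. It is not Intel Perceptual Computing SDK code; the intent names and word lists simply mirror the NOTHING/HEALTH/AMMO example above, and the recognized sentence is assumed to arrive as a plain string from the voice module.

#include <algorithm>
#include <cctype>
#include <sstream>
#include <string>
#include <vector>

enum class Intent { None, Nothing, Health, Ammo };

// Lower-case a copy of the string so keyword comparison is case-insensitive.
static std::string ToLower(std::string s)
{
    std::transform(s.begin(), s.end(), s.begin(),
                   [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
    return s;
}

// Scan each spoken word against the illustrative keyword lists and return
// the first intent that matches; Intent::None means "ask the question again".
Intent ParseIntent(const std::string& spoken)
{
    static const std::vector<std::string> nothingWords = {"nothing", "nout", "bye", "goodbye"};
    static const std::vector<std::string> healthWords  = {"health", "kit", "medical", "heal", "energy"};
    static const std::vector<std::string> ammoWords    = {"ammo", "weapon", "gun", "bullets", "charge"};

    std::istringstream words(ToLower(spoken));
    std::string word;
    while (words >> word) {
        auto matches = [&word](const std::vector<std::string>& list) {
            return std::find(list.begin(), list.end(), word) != list.end();
        };
        if (matches(healthWords))  return Intent::Health;
        if (matches(ammoWords))    return Intent::Ammo;
        if (matches(nothingWords)) return Intent::Nothing;
    }
    return Intent::None;
}

A real implementation would also strip punctuation and handle multi-word phrases such as "don’t anything", but the same pattern applies.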

Of course, this is the most complex form of voice detection and would require extensive testing and a wide vocabulary of detections to make it work naturally. The payoff is a gaming experience beyond anything currently enjoyed, allowing the player to engage directly with characters in the game.

5. Detect Player Commands

An easier form of voice control is the single command method, which gives the player advance knowledge of a specific list of words they can use to control the game. The Intel® Perceptual Computing SDK has two voice recognition modes “dictation” and “command and control.” The former would be used in the above complex system and the latter for the technique below.

A game has many controls above and beyond simply moving and looking around, and depending on the type of game, can have nested control options dependent on the context you are in. You might select a weapon with a single key, but that weapon might have three different firing modes. Traditionally this would involve multiple key presses given the shortage of quick-access keys during high octane FPS action. Replace or supplement this with a voice command system, and you gain the ability to select the weapon and firing mode with a single word.


Figure 7. Just say the word “reload”, and say goodbye to a keyboard full of controls

The “command and control” mode allows very quick response to short words and sentences, but requires that the names you speak and the names detected are identical. Also you may find that certain words when spoken quickly will be detected as a slight variation on the word you had intended. A good trick is to add those variations to the database of detectable words so that a misinterpreted word still yields the action you wanted in the game. To this end it is recommended that you limit the database to as few words as that part of the game requires. For example, if you have not collected the “torch” in the game, you do not need to add “use torch” to the list of voice controls until it has been collected.

It is also recommended that you remove words that are too similar to each other so that the wrong action is not triggered at crucial moments in the game play. For example, you don’t want to set off a grenade when you meant to fire off a silent grappling hook over a wall to escape an enemy.

If the action you want to perform is not too dependent on quick reaction times, you can revert to the “dictation” mode and issue more sophisticated controls, such as the voice command “reload with armor piercing.” The parser would detect “reload,” “armor,” and “piercing.” The first word would trigger a reload, and the remaining ones would indicate a weapon firing mode change and trigger that.
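
A hedged sketch of that dictation parse is shown below. The VoiceCommand structure and its fields are assumptions made for illustration; only the keywords themselves come from the example above.

#include <sstream>
#include <string>

// Result of parsing a dictated command such as "reload with armor piercing".
struct VoiceCommand {
    bool reload = false;              // "reload" was spoken
    bool armorPiercingMode = false;   // both "armor" and "piercing" were spoken
};

VoiceCommand ParseReloadCommand(const std::string& sentence)
{
    VoiceCommand cmd;
    bool armor = false, piercing = false;

    std::istringstream words(sentence);
    std::string word;
    while (words >> word) {
        if (word == "reload")   cmd.reload = true;
        if (word == "armor")    armor = true;
        if (word == "piercing") piercing = true;
    }
    cmd.armorPiercingMode = armor && piercing;
    return cmd;
}

The game would then trigger the reload and, if both firing-mode keywords were detected, switch the weapon to its armor-piercing mode.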

When playing the game, using voice to control your status will start to feel like you have a helper sitting on your shoulder, making your progress through the game much more intuitive. Obviously there are some controls you want to keep on a trigger finger, such as firing, moving, looking around, ducking, and other actions that require split-second reactions. The vast majority, however, can be handed over to the voice control system, and the more controls you have, the more this new method wins over the old keyboard approach.

6. Tricks and Tips

 

Do’s

  • Using awareness of the player’s real-world position and motion to control elements within the game will create an immediate sense of connection. Deciding when to demonstrate that connection will be the key to a great integration of Perceptual Computing.
  • Use “dictation” for conversation engines and “command and control” for instant response voice commands. They can be mixed, providing reaction time does not impede game play.
  • If you are designing your game from scratch, consider developing a control system around the ability to sense real player position and voice commands. For example a spell casting game would benefit in many ways from Perceptual Computing as the primary input method.
  • When you are using real world player detection, ensure you specify a depth image stream of 60 frames per second to give your game the fastest possible performance.

Don’ts

  • Do not feed raw head tracking coordinates directly to the player camera, as this will create uncontrollable jittering and ruin the smooth rendering of any game.
  • Do not use voice control for game actions that require instantaneous responses. As accurate as voice control is, there is a noticeable delay between speaking the word and getting a response from the voice function.
  • Do not detect whole sentences in one string comparison. Parse the sentence into individual words and run string comparisons on each one against a larger database of word variations of similar meaning.

7. Final Thoughts

A veteran of the FPS gaming experience may well scoff at the concept of voice-activated weapons and real-world acrobatics to dodge rockets. The culture of modern gaming has created a total dependence on the mouse, keyboard, and controller as lifelines into these gaming worlds. Naturally, offering an alternative would be viewed with incredulity until the technology fully saturates into mainstream gaming. The same can be said of virtual reality technology, which for 20 years attempted to gain mainstream acceptance without success.

The critical difference today is that this technology is now fast enough and accurate enough for games. Speech detection 10 years ago was laughable and motion detection was a novelty, and no game developer would touch them with a barge pole. Thanks to the Intel Perceptual Computing SDK, we now have a practical technology to exploit and one that’s both accessible to everyone and supported by peripherals available at retail.

An opportunity exists for a few developers to really pioneer in this area, creating middleware and finished products that push the established model of what an FPS game actually is. It is said that among all the fields of computing, game technology is the one most likely to push all aspects of the computing experience. No other software pushes the limits of the hardware as hard as games, pushing logic and graphics processing to the maximum (and often beyond) in an attempt to create a simulation more realistic and engaging than the year before. It’s fair to suppose that this notion extends to the very devices that control those games, and it’s realistic to predict that we’ll see many innovations in this area in years to come. The great news is that we already have one of those innovations right here, right now, and it only requires us to show the world what amazing things it can do.

About The Author

When not writing articles, Lee Bamber is the CEO of The Game Creators (http://www.thegamecreators.com), a British company that specializes in the development and distribution of game creation tools. Established in 1999, the company and surrounding community of game makers are responsible for many popular brands including Dark Basic, FPS Creator, and most recently App Game Kit (AGK).

The application that inspired this article and the blog that tracked its seven week development can be found here: http://ultimatecoderchallenge.blogspot.co.uk/2013/02/lee-going-perceptual-part-one.html

Lee also chronicles his daily life as a coder, complete with screen shots and the occasional video here: http://fpscreloaded.blogspot.co.uk

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Tutorial Cocos2d-x* using Windows* - Part 1

$
0
0

In this tutorial, the first of a series on Cocos2d-x using Windows, we are going to show how to create a simple game using the Cocos2d-x framework in a Windows development environment.

What is Cocos2d-x?

Cocos2d-x is a cross-platform framework for games (and other graphical apps, like interactive books) based on the cocos2d for iOS*, but using C++, JavaScript*, or Lua* instead of Objective-C*.

One of the advantages of this framework is to create games that can be deployed on different platforms (Android*, iOS, Win32*, Windows* Phone, Windows* 8, Mac*, Linux*, etc.) keeping the same code base and making a few platform-specific adaptations for each one.

The source code of the framework is granted under the MIT License, and it can be found here.

If you want to know more about Cocos2d-x and its documentation, check out: http://www.cocos2d-x.org/.

Creating your first game

1. Download the latest version of the framework from the site and unzip it in your development environment. In this tutorial, the version 2.2.2 was used, and the framework was unzipped to the Desktop (C:\Users\felipe.pedroso\Desktop\cocos2d-x-2.2).

2. To create a new project on cocos2d-x, we are going to use a Python* script (create_project.py) that creates the whole project structure inside the folder where the framework was unzipped. If you don’t have the Python runtime installed, download the 2.7.6 version from this link: http://www.python.org/download/.

3. Open the command prompt (cmd.exe) and execute the following commands:

  • Go to the script folder (it’s important to run the script inside the ‘project-creator’ folder)
    cd C:\Users\felipe.pedroso\Desktop\cocos2d-x-2.2\tools\project-creator
  • Run the script with the following command:
    python create_project.py -project MyFirstGame -package com.example.myfirstgame -language cpp

The parameters are as follows:
    project: The name of your project/game
    package: The package name of your app (e.g., com.myCompany.MyFirstGame)
    language: The programming language of the project (cpp, lua and JavaScript)

Note: To run the python command from the command prompt, add the folder where Python was installed to the PATH environment variable.

The created project will contain the base code of the game (Classes), the resources (images, audio, etc.), and one project for each framework-supported platform.

Building as a Win32 App (Windows* 7 or Windows 8 desktop mode)

Requirements:

1. Open the MyFirstGame.sln file inside the proj.win32 folder from the project directory using Visual Studio.

2. Build the project by pressing F6 (or use the menu Build -> Build Solution) and run the project pressing F5 (or use the menu Debug->Start Debugging).

If nothing went wrong, you’ll see the following window:

Building as a Windows Store App

Requirements:

To build your project as a Windows Store App, open the MyFirstGame.sln file inside the proj.winrt folder and build it using the same procedure that was used for the Win32 project.

After building and running, you’ll see the following screen:

Note: the cocos2d-x version used in this tutorial did not work with Windows* 8.1.

Building as an Android App

Requirements:

In the same way that Python was added to the Windows path, add the tools and platform-tools directories from the Android SDK, the NDK root directory, and the Apache Ant bin directory to the path so they can be used to build your app.

1. Open a new command prompt (cmd.exe) and run the following commands to configure the environment variables that are necessary to compile the Android app:

    set COCOS2DX_ROOT=C:\Users\felipe.pedroso\Desktop\cocos2d-x-2.2
    set NDK_TOOLCHAIN_VERSION=4.8
    set NDK_MODULE_PATH=%COCOS2DX_ROOT%;%COCOS2DX_ROOT%\cocos2dx\platform\third_party\android\prebuilt


The variables we used are:

    COCOS2DX_ROOT: the directory where the framework was unzipped
    NDK_TOOLCHAIN_VERSION: the version of the NDK toolchain that will be used to build the project
    NDK_MODULE_PATH: the modules that need to be included on the NDK build. In this case, we are using the prebuilt modules from cocos2d-x

2. With the environment variables configured, go to the Android project folder:

    cd C:\Users\felipe.pedroso\Desktop\cocos2d-x-2.2\projects\MyFirstGame\proj.android

3. Copy the game resources (images, sounds, etc.) to the assets folder:

    rmdir /S /Q assets
    mkdir assets
    xcopy /E ..\Resources .\assets

4. Run the following command to build the native modules:

    ndk-build.cmd -C . APP_ABI="armeabi armeabi-v7a x86"

This command will generate the native libraries for three different architectures: ARM, ARM-NEON*, and x86. This allows your game to run on these architectures while taking the best advantage of each.

5. After finishing the build process, build the Android app with the ant command:

    ant debug


    
Now, to install the app in a device or emulator use the command:

    adb install -r bin\MyFirstGame.apk

After that, you just need to run your app:

OK, now your game can run on at least three platforms: Android, Windows 7, and Windows 8! In the next tutorial of this series, we are going to talk about how the framework works, how to add an element on the screen and how to make it move.


Code size optimization using the Intel® C/C++ Compiler

$
0
0

Code size optimization is a key factor, and it is especially critical in embedded systems, which often require code size reduction even at the cost of application speed. An application developed for an embedded system is generally tuned for a particular processor with a finite memory size, and memory is the main cost component of an embedded product. The code size of the application directly impacts the memory requirement of an embedded system: reduced code size means less memory usage and a lower product cost. In addition, with code size optimized you can add more functionality and improve code quality, and therefore reliability, as well. It is therefore natural for developers, especially those developing embedded software, to optimize their applications to achieve a proper trade-off between code size and performance.

Click here to continue reading

Krita* Gemini* - Twice as Nice on a 2-in-1

$
0
0

Download PDF

Why 2-in-1

A 2 in 1 is a PC that transforms between a laptop computer and a tablet. Laptop mode (sometimes referred to as desktop mode) allows a keyboard and mouse to be used as the primary input devices. Tablet mode relies on the touchscreen, thus requiring finger or stylus interaction. A 2 in 1, like the Intel® Ultrabook™ 2 in 1, offers precision and control with multiple input options that allow you to type when you need to work and touch when you want to play.

Developers have to consider multiple scenarios when modifying their applications to take advantage of this new type of transformable computer. Some applications may want to keep the menus and appearance nearly identical in both modes, while others, like Krita Gemini for Windows* 8 (Reference 1), will want to carefully select what is highlighted and made available in each user interface mode. Krita is a program for sketching and painting that offers an end-to-end solution for creating digital painting files from scratch (Reference 2). This article will discuss how the Krita developers added 2 in 1 mode awareness to their application, including the implementation of both automatic and user-selected mode switching, and some of the areas developers should consider when creating applications for the 2 in 1 experience.

Introduction

Over the years, computers have used a variety of input methods, from punch cards to command lines to point-and-click. With the adoption of touch screens, we can now point-and-click with a mouse, stylus, or fingers. Most of us are not ready to do everything with touch, and with mode-aware applications like Krita Gemini, we don’t have to. 2 in 1s, like an Intel® Ultrabook™ 2 in 1, can deliver the user interface mode that gives the best experience possible, on one device.

There are multiple ways that a 2 in 1 computer can transform between laptop and tablet modes (Figure 1 & Figure 2). There are many more examples of 2 in 1 computers on the Intel website (Reference 3). The computer can transform from laptop mode into tablet mode by detaching the screen from the keyboard or by some other means of disabling the keyboard and making the screen the primary input device (such as folding the screen on top of the keyboard). Computer manufacturers are beginning to provide this hardware transition information to the operating system. The Windows* 8 API event WM_SETTINGCHANGE, with the “ConvertibleSlateMode” text parameter, signals the automatic laptop-to-tablet and tablet-to-laptop mode changes. It is also a good idea for developers to include a manual mode change button for users’ convenience.

Just as there are multiple ways that the 2 in 1 can transform between laptop and tablet modes, software can be designed in different ways to respond to the transformation. In some cases it may be desirable to keep the UI as close to the laptop mode as possible, while in other cases you may want to make more significant changes to the UI. Intel has worked with many vendors to help them add 2 in 1 awareness to their applications. Intel helped KO GmbH combine the functionality of their Krita Touch application with their popular Krita open source painting program (laptop application) in the new Krita Gemini application. The Krita project is an active development community, welcoming new ideas and maintaining quality support. The team added the mechanisms required to provide a seamless transition from the laptop “mouse and keyboard” mode to the touch interface for tablet mode. See Krita Gemini’s user interface (UI) transformations in action in the short video in Figure 3.


Figure 3: Video - Krita Gemini UI Change – click icon to run

Create in Tablet Mode, Refine in Laptop Mode

The Gemini team set out to maximize the user experience in the two modes of operation. In Figure 4 & Figure 5 you can see that the UI changes from one mode to the other are many and dramatic. This allows the user to seamlessly move from drawing “in the field” while in tablet mode to touch-up and finer detail work when in laptop mode.


Figure 4:Krita Gemini tablet user interface


Figure 5: Krita Gemini laptop user interface

There are three main steps to making an application transformable between the two modes of operation.

Step one: the application must be touch aware. We were somewhat lucky in that the touch-aware work was started well ahead of the 2 in 1 activity; usually this is a heavier lift than the tablet-mode transition work. Intel has published articles on adding touch input to a Windows 8 application (Reference 4).

Step two: add 2 in 1 awareness. The first part of the video (Figure 3) above demonstrates the automatic, sensor-activated mode change, a rotation in this case (Figure 6). After that, the user-initiated transition via a button in the application is shown (Figure 7).


Figure 6:Sensor-state activated 2 in 1 mode transition


Figure 7:Switch to Sketch transition button – user initiated action for laptop to tablet mode

Support for automatic transitions requires the sensor state to be defined and monitored, and appropriate actions to be taken once the state is known. In addition, a user-initiated mode transition should be included as a courtesy to the user should she wish to be in tablet mode when the code favors laptop mode. You can reference the Intel article “How to Write a 2-in-1 Aware Application” for an example approach to adding the sensor-based transition (Reference 5). Krita’s code for managing the transitions from one mode to the other can be found in their source code by searching for “SlateMode” (Reference 6). Krita is released under a GNU Public License. Please refer to the source code repository for the latest information (Reference 7).

// Snip from Gemini - Define 2-in1 mode hardware states:

#ifdef Q_OS_WIN
#include <shellapi.h>
#define SM_CONVERTIBLESLATEMODE 0x2003
#define SM_SYSTEMDOCKED 0x2004
#endif

Not all touch-enabled computers offer the automatic transition, so we suggest you do as the Krita Gemini team did here and include a button in your application to allow the user to manually initiate the transition from one mode to the other. Gemini’s button is shown in Figure 7. The button-initiated UI transition performs the same functions as the mechanical-sensor-initiated transition: the screen information and default input device change from touch and large icons in tablet mode to keyboard, mouse, and smaller icons in laptop mode. However, since the sensor path is not involved, the button method must perform the screen, icon, and default input device changes without the sensor-state information. Therefore, developers should provide a path for the user to change from one mode to the other with touch or mouse regardless of the current button-initiated UI state, in case the user chooses an inappropriate mode.

The button definition - KAction() - and its states and actions are shown in the code below (Reference 6):

// Snip from Gemini - Define 2-in1 Mode Transition Button:

         toDesktop = new KAction(q);
         toDesktop->setEnabled(false);
         toDesktop->setText(tr("Switch to Desktop"));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchDesktopForced()));
         connect(toDesktop, SIGNAL(triggered(Qt::MouseButtons,Qt::KeyboardModifiers)), q, SLOT(switchToDesktop()));
         sketchView->engine()->rootContext()->setContextProperty("switchToDesktopAction", toDesktop);

Engineers then took on the task of handling the events triggered by the button: the code first checks the last known state of the system (since it cannot assume it is running on a 2-in-1 system) and then changes the mode (Reference 6):

// Snip from Gemini - Perform 2-in1 Mode Transition via Button:

#ifdef Q_OS_WIN
bool MainWindow::winEvent( MSG * message, long * result ) {
     if (message && message->message == WM_SETTINGCHANGE && message->lParam)
     {
         if (wcscmp(TEXT("ConvertibleSlateMode"), (TCHAR *) message->lParam) == 0)
             d->notifySlateModeChange();
         else if (wcscmp(TEXT("SystemDockMode"), (TCHAR *) message->lParam) == 0)
             d->notifyDockingModeChange();
         *result = 0;
         return true;
     }
     return false;
}
#endif

void MainWindow::Private::notifySlateModeChange()
{
#ifdef Q_OS_WIN
     bool bSlateMode = (GetSystemMetrics(SM_CONVERTIBLESLATEMODE) == 0);

     if (slateMode != bSlateMode)
     {
         slateMode = bSlateMode;
         emit q->slateModeChanged();
         if (forceSketch || (slateMode && !forceDesktop))
         {
             if (!toSketch || (toSketch && toSketch->isEnabled()))
                 q->switchToSketch();
         }
         else
         {
                 q->switchToDesktop();
         }
         //qDebug() << "Slate mode is now"<< slateMode;
     }
#endif
}

void MainWindow::Private::notifyDockingModeChange()
{
#ifdef Q_OS_WIN
     bool bDocked = (GetSystemMetrics(SM_SYSTEMDOCKED) != 0);

     if (docked != bDocked)
     {
         docked = bDocked;
         //qDebug() << "Docking mode is now"<< docked;
     }
#endif
}

Step three: fix issues discovered during testing. While using the palette in touch or mouse mode is fairly easy, the workspace itself needs to hold focus and zoom consistent with the user’s expectations; therefore, making everything bigger was not an option. Controls got bigger for touch interaction in tablet mode, but the screen image itself needed to be managed at a different level to keep the expected user experience. Notice in the video (Figure 3) that the image in the edit pane stays the same size on the screen from one mode to the other. This was an area that took creative solutions from the developers to reserve screen real estate and hold the image consistent. Another issue was that an initial effort had both UIs running, which adversely affected performance because both UIs shared the same graphics resources. Adjustments were made in both UIs to keep the allotted resource requirements as distinct as possible and to prioritize the active UI wherever possible.

Wrap-up

As you can see, adding 2 in 1 mode awareness to your application is a pretty straightforward process. You need to look at how your users will interact with your application when in one interactive mode versus the other. Read the Intel article “Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs“ for more information on creating an application with a transforming user interface (Reference 8). For Krita Gemini, the decision was made to make creating drawings and art simple while in tablet mode and add the finishing touches to those creations while in the laptop mode. What can you highlight in your application when presenting it to users in tablet mode versus laptop mode?

References

  1. Krita Gemini General Information
  2. Krita Gemini executable download (scroll to Krita Gemini link)
  3. Intel.com 2 in 1 information page
  4. Intel Article: Mixing Stylus and Touch Input on Windows* 8 by Meghana Rao
  5. Intel Article: How to Write a 2-in-1 Aware Application by Stephan Rogers
  6. Krita Gemini mode transition source code download
  7. KO GmbH Krita Gemini source code and license repository
  8. Intel® Developer Forum 2013 Presentation by Meghana Rao (pdf) - Write Transformational Applications for 2 in 1 Devices Based on Ultrabook™ Designs
  9. Krita 2 in 1 UI Change Video on IDZ or YouTube*

About the Author

Tim Duncan is an Intel Engineer and is described by friends as “Mr. Gidget-Gadget.” Currently helping developers integrate technology into solutions, Tim has decades of industry experience, from chip manufacturing to systems integration. Find him on the Intel® Developer Zone as Tim Duncan (Intel)

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2013-2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

 


Touch Developer Guide for Ultra Mobile Devices

$
0
0

Touch Developer Guide

 

Download PDF

Revision 2.0
January, 2014

Contents

Revision History
Figures
Tables
Abstract
Introduction
Hardware Requirements
Operating Environments for Ultra Mobile Devices
Touch Interactions: Pointer, Gesture, and Manipulation
  Pointer Interactions
  Manipulation and Gesture Interactions
  Custom Gesture Recognition
Touch Support in Web Browsers
  Internet Explorer 10 and its Compatibility with Windows 7
  Internet Explorer 11
Identifying Touch Capability
  Windows 7 and Windows 8 Desktop
  Windows 8 (Windows Store apps)
  Web apps
UI design for Touch-Enabled Devices
Tips for Building Optimized, Responsive Apps
Resources for Developing Touch Applications
  Related Articles on Intel Developer Zone:
  Related Articles on MSDN
  Web Apps
  Videos
Summary
About the Author
Notices

Figures

Figure 1. Snippet for determining if browser is Internet Explorer*
Figure 2. Windows* 7 Example for identifying touch capability
Figure 3. Windows*UI Example for Identifying Touch Capability

Tables

Table 1. Pointer APIs available for Touch-Enabled Devices
Table 2. Basic gestures used for manipulation interactions
Table 3. Gesture Interfaces available for Touch-Enabled Devices
Table 4. Standard expected interactions and consequences for touch interactions
Table 5. Basic gestures defined for touchpads – Windows* 8.1
Table 6. Touch Interfaces for Internet Explorer* 10
Table 7. Scrolling and Zooming Properties for Internet Explorer* Versions 10 and 11
Table 8. Considerations for Touch-Enabled Apps

 

Abstract

This Guide contains information about the APIs that application developers need to use when they are developing apps targeted for Ultra Mobile Devices (PCs, Ultrabook™ devices, 2-in-1s, tablets) and Adaptive All in Ones, which have touch screens and multiple usages, such as a monitor or a multi-user tablet that may be easily moved to communal locations. This guide will cover the interfaces required for Windows* (7 through 8.1) as well as for web apps. It will also describe what the common user expectations for gestures are and provide guidance for developing satisfying touch interfaces. Lastly, some tips on optimizing touch-enabled apps will be provided.

Introduction

Now that there are many touch-capable, Ultra Mobile Devices available, it is important for software developers to create apps that are not only designed for touch input but also adapt to the different layouts that apply to various screen sizes and positions (portrait vs. landscape). The following list provides basic design considerations for touch-enabled apps.

  • Touch interfaces imply bigger targets. For 96 dots-per-inch screens, the most frequently used controls should be at least 40x40 pixels, big enough to be uniquely indicated by a fingertip. For higher-resolution screens, use a minimum of a ½-inch square. Leave at least 10 pixels (1/8 of an inch) of padding between touch targets.
  • Provide immediate feedback. Elements that are interactive should react when touched, either by changing color, size, or by moving. When the app provides smooth, responsive visual feedback while panning, zooming, and rotating, it feels highly interactive.
  • Moveable content follows finger. When a user drags an element, it should follow the user’s finger when moving.
  • Interactions should be reversible or at least cancellable.
  • Allow multi-touch interactions. Touch interactions should not change based on the number of fingers that are touching the screen.
  • Allow both touch and mouse support. The user may be working from a system that does not have touch capabilities.
  • Don’t rely on hover. While browsers will fake hover on tap, if there is an underlying link, the hover state does not stay long enough for the user to see since the tap action will also fire the link.
  • The app adapts to the device/environment that the user chooses. This means providing layouts with the correct resolution for every likely method/device for which the user will be able to run the app.
  • Optimize performance that results in a highly responsive app. Long delays when interacting with touch elements cause frustration. Ensure that complex processing runs in the background when and where possible.

While the basic design principles for touch apply to all form factors, designing touch interfaces for Adaptive All in One (Adaptive AIO) devices brings further considerations. Apps that are well-suited for the Adaptive AIO may have a multi-user component, for example, games where there is more than one player. Consider the implications of multiple users interacting with your app at the same time. The touch APIs for Windows and web apps can track more than one touch and/or drag at a time, but you still must design your app to behave sensibly under those conditions.

Even if you support multiple touch and/or drag events occurring at the same time, being able to differentiate more than one user is a very difficult task. If you need this capability, the simplest way to distinguish multiple users is to partition the display where each player is assigned their own portion during the game or activity.

For more information on writing touch-enabled apps, refer to the MSDN article, Touch Interaction Design. Note: Please refer to the MSDN Terms of Use for licensing details.

 

Hardware Requirements

Consumers have a variety of touch-capable devices to choose from, ranging from smartphones, tablets, 2-in-1s, PCs, to Adaptive AIOs. Developers are faced with the challenge of developing apps that feel natural for each form-factor. In general, touch-enabled apps should be designed to run on any of their targeted devices while taking full advantage of the touch capabilities of each. Devices that are capable of at least 10 simultaneous points represent the high-end of touch-capability and should be the design point for an app’s touch interface.

Since 2011, OEMs have built devices with touchscreens based on Intel processors:

  • 2nd generation Intel® Core™ processor family (codenamed Sandy Bridge).
  • 3rd generation Intel® Core™ processor family (codenamed Ivy Bridge).
  • 4th generation Intel® Core™ processor family (codenamed Haswell).

With so many Ultra Mobile Devices available to consumers, it has become more crucial than ever to develop apps that are touch-enabled for any form-factor for which they are targeted.

The rest of this Developer Guide assumes that the target platform is a touch-capable system. Software designed for these devices can be adapted to other touch-enabled devices that either run the same OS or run in the web browser.

Operating Environments for Ultra Mobile Devices

Designing apps today requires careful consideration of which environment customers use most often, as well as which environment an app is best suited for. Whether an app targets the Windows 7 or Windows 8 Desktop or ships as a Windows* Store app, the developer needs to understand which interfaces apply.

Windows Store apps must use the WinRT APIs. If an app is to run in the Windows 8+ Desktop environment, there are two choices: the legacy APIs from Windows 7 or the new Windows 8+ APIs for touch. These interfaces will be discussed further in the sections below. Other options exist for developing web apps. Touch interfaces available for web browsers are also discussed below.

Touch Interactions: Pointer, Gesture, and Manipulation

There are varying levels of interpretation of touch input. Pointer events are the most basic because they represent individual points of touch contact. Gesture and Manipulation events are built on that foundation. Gesture events provide an easy way to capture simple tap-and-hold gestures. Manipulation events are for touch interactions that use physical gestures to emulate physical manipulation of UI elements. Manipulation events provide a more natural experience when the user interacts with UI elements on the screen. The available touch interfaces have varying levels of support for these three levels of interpretation.

 

Pointer Interactions

A pointer event is a single, unique input or "contact" from an input device such as a mouse, stylus, single finger, or multiple fingers. The system creates a pointer when a contact is first detected and destroys it when the contact leaves the detection range or is canceled. In the case of multi-touch input, each contact is a unique pointer. Table 1 shows the interfaces for retrieving basic pointer events that are available to mobile devices running Windows 7 and Windows 8+.

Table 1. Pointer APIs available for Touch-Enabled Devices

OS Compatibility | Touch Interface | Remarks
Windows* 7 (Desktop) | WM_TOUCH
  • Also compatible with the Windows 8+ Desktop environment.
  • Maximum number of simultaneous touches limited by hardware.
  • No built-in gesture recognition.
  • Must call RegisterTouchWindow since WM_TOUCH messages are not sent by default.
Windows 8+ only (Desktop) | WM_POINTER
  • Applicable only to the Windows 8+ Desktop environment.
  • By default, Windows 8+ animations and interaction feedback are generated and available for further processing.
Windows Store app | PointerPoint
  • Applicable only for Windows Store apps.

 

Touch Interfaces available for Windows 7 and Windows 8+:

Refer to Guidelines for common user interactions on MSDN.

 

Windows 7 and Windows 8 Desktop Touch Interface: WM_TOUCH

The WM_TOUCH message can be used to indicate that one or more pointers, such as a finger or pen, have made contact on the screen.
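As a minimal sketch (not the MSDN sample), and assuming a standard Win32 window that registers for touch, WM_TOUCH handling looks roughly like this:

#include <windows.h>
#include <vector>

// Call once after the window is created; WM_TOUCH is not sent by default.
void EnableTouch(HWND hWnd)
{
    RegisterTouchWindow(hWnd, 0);
}

// Call from the window procedure when msg == WM_TOUCH.
LRESULT OnTouch(HWND hWnd, WPARAM wParam, LPARAM lParam)
{
    UINT count = LOWORD(wParam);                       // number of contacts in this message
    std::vector<TOUCHINPUT> inputs(count);
    HTOUCHINPUT hTouch = reinterpret_cast<HTOUCHINPUT>(lParam);

    if (GetTouchInputInfo(hTouch, count, inputs.data(), sizeof(TOUCHINPUT)))
    {
        for (const TOUCHINPUT& ti : inputs)
        {
            // TOUCHINPUT coordinates are screen-relative, in hundredths of a pixel.
            POINT pt = { ti.x / 100, ti.y / 100 };
            ScreenToClient(hWnd, &pt);

            if (ti.dwFlags & TOUCHEVENTF_DOWN) { /* contact ti.dwID started */ }
            if (ti.dwFlags & TOUCHEVENTF_MOVE) { /* contact ti.dwID moved   */ }
            if (ti.dwFlags & TOUCHEVENTF_UP)   { /* contact ti.dwID lifted  */ }
        }
        CloseTouchInputHandle(hTouch);
        return 0;
    }
    return DefWindowProc(hWnd, WM_TOUCH, wParam, lParam);
}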

Sample Code:
Guidance:

 

Windows 8 and 8.1 Desktop Touch Interface: WM_POINTER

The WM_POINTER messages are part of the Direct Manipulation APIs and are specific to the Windows 8+ Desktop. This interface can be used to capture individual touch pointers as well as Gestures and Manipulations. The WM_POINTER messages will be discussed further in the section on Manipulation and Gesture Interactions.
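As a minimal illustration (not taken from the Direct Manipulation documentation), a Windows 8+ Desktop window procedure might route raw pointer messages as sketched below; the helper function and dispatching are assumptions for illustration.

#include <windows.h>

// Sketch of a Windows 8+ Desktop pointer handler. Each finger, pen, or
// mouse contact arrives as a pointer message carrying a pointer ID.
LRESULT OnPointerMessage(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    UINT32 pointerId = GET_POINTERID_WPARAM(wParam);
    POINTER_INFO info = {};

    if (GetPointerInfo(pointerId, &info))
    {
        POINT pt = info.ptPixelLocation;   // screen coordinates
        ScreenToClient(hWnd, &pt);

        switch (msg)
        {
        case WM_POINTERDOWN:   /* contact pointerId started at pt */ break;
        case WM_POINTERUPDATE: /* contact pointerId moved to pt   */ break;
        case WM_POINTERUP:     /* contact pointerId lifted        */ break;
        }
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}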

Reference on MSDN: Direct Manipulation APIs

 

Windows 8.1 Desktop adds direct manipulation support using the touchpad: Prior to Windows 8.1, device support in Windows assumed that the only sources of user input events were from a keyboard, mouse, touchscreen, or pen. Touchpads (present on many laptop designs) were treated as a mouse source. With Windows 8.1, the APIs have been expanded to treat touchpads as a distinct input method. For more information, refer to Touchpad interactions on MSDN.

There is also a new API related to touch processing: GetPointerInputTransform. It retrieves one or more 4x4 matrices that transform the screen coordinates of a pointer input message into client coordinates, effectively inverting the mapping from the input device to screen coordinates.

Windows* Modern UI Touch Interface: PointerPoint

The PointerPoint class is part of the Windows Runtime environment and is compatible only with Windows Store apps. It provides basic properties for the input pointer associated with a single mouse, stylus, or touch contact. MSDN has sample code that can help developers get started working with the PointerPoint interface.

Sample Code on MSDN: Input: XAML user input events
Note: Please refer to the MSDN Terms of Use for licensing details.
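As a rough, hypothetical C++/CX sketch (the page class MainPage, the XAML element MyCanvas, and the handler name are assumptions, not part of the MSDN sample), a PointerPressed handler can retrieve the PointerPoint like this:

// C++/CX sketch for a Windows Store app: retrieve the PointerPoint for a
// touch, mouse, or pen contact inside a XAML PointerPressed handler.
using namespace Windows::UI::Input;
using namespace Windows::UI::Xaml::Input;

void MainPage::MyCanvas_PointerPressed(Platform::Object^ sender,
                                       PointerRoutedEventArgs^ e)
{
    PointerPoint^ pp = e->GetCurrentPoint(MyCanvas);

    float x = pp->Position.X;          // position relative to MyCanvas
    float y = pp->Position.Y;
    unsigned int id = pp->PointerId;   // unique per contact

    // Branch on pp->PointerDevice to distinguish touch, mouse, and pen,
    // or forward pp to a GestureRecognizer (see later in this guide).
}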

 

Manipulation and Gesture Interactions

Gesture events are used to handle static-finger interactions such as tapping and press-and-hold. Double-tap and right-tap are derived from these basic gestures:

  • Gestures: the physical act or motion performed on or by the input device that can be one or more fingers, a stylus, or a mouse.
  • Manipulation: the immediate, ongoing response an object has to a gesture. For example, the slide gesture causes an object to move in some way.
  • Interactions: how a manipulation is interpreted and the command or action that results from the manipulation. For example, both the slide and swipe gestures are similar but the results vary according to whether a distance threshold is exceeded.

Table 2. Basic gestures used for manipulation interactions

Gesture | Type | Description
Press and Hold | Static Gesture | A single contact is detected and does not move. Press and hold causes detailed information or teaching visuals to be displayed without a commitment to an action.
Tap | Static Gesture | One finger touches the screen and lifts up immediately.
Turn | Manipulation Gesture | Two or more fingers touch the screen and move in a clockwise or counter-clockwise direction.
Slide | Manipulation Gesture | One or more fingers touch the screen and move in the same direction (also called panning).
Swipe | Manipulation Gesture | One or more fingers touch the screen and move a short distance in the same direction.
Pinch | Manipulation Gesture | Two or more fingers touch the screen and move closer together.
Stretch | Manipulation Gesture | Two or more fingers touch the screen and move further apart.

Table 3. Gesture Interfaces available for Touch-Enabled Devices

OS Compatibility | GESTURE Interface | Remarks
Windows* 7, Windows 8+ (Desktop) | WM_TOUCH + IManipulationProcessor
  • This combination gives the developer functionality similar to that of the WM_POINTER API, which is available only on the Windows 8/8.1 Desktop.
  • Maximum touch points dictated by hardware.
Windows 7, Windows 8+ (Desktop) | WM_GESTURE + GESTUREINFO structure
  • Maximum of two simultaneous touch points.
  • No simultaneous gestures.
  • If the app requires more complex manipulations than what is available from the WM_GESTURE message, a custom gesture recognizer needs to be written using the WM_TOUCH interface.
Windows 8+ (Desktop) | WM_POINTER
  • Gesture interactions result from the use of the Direct Manipulation APIs, which take in a stream of pointer input messages.
Windows Modern UI | PointerPoint
  • Gesture interactions result from the use of GestureRecognizer, which takes the output from PointerPoint.
Windows 8.1 | GetPointerInputTransform
  • New Direct Manipulation API. Applicable to interactions with the touchpad.

 

 

Table 4. Standard expected interactions and consequences for touch interactions

Interactions | Description
Press and Hold to learn | Causes detailed information or teaching visuals to be displayed.
Tap for primary action | Invokes a primary action, for example launching an application or executing a command.
Slide to pan | Used primarily for panning interactions but can also be used for moving, drawing, or writing. Can also be used to target small, densely packed elements by scrubbing (sliding the finger over related objects such as radio buttons).
Swipe to select, command, and move | Sliding the finger a short distance, perpendicular to the panning direction, selects objects in a list or grid.
Pinch and stretch to zoom | Not only used for resizing, this interaction also enables jumping to the beginning, end, or anywhere within the content with Semantic Zoom. A SemanticZoom control provides a zoomed-out view that shows groups of items and quick ways to go back to them.
Turn to rotate | Rotating with two or more fingers causes an object to rotate.
Swipe from edge for app commands | App commands are revealed by swiping from the bottom or top edge of the screen.
Swipe from edge for system commands | Swiping from the right edge of the screen shows the "charms" that are used for system commands. Swiping from the left edge cycles through currently running apps, and sliding from the top edge toward the bottom of the screen closes the app. Sliding from the top edge down and to the left or right edge snaps the current app to that side of the screen.

 

Table 5. Basic gestures defined for touchpads – Windows* 8.1

Gesture | Description
Hover to learn | Allows the user to hover over an element to get more detailed information or teaching visuals without a commitment to action.
Single finger tap for primary action | Invokes the primary action (such as launching an app).
Two finger tap to right-click | Tapping with two fingers simultaneously displays the app bar with global commands; tapping with two fingers on an element selects it and displays the app bar with contextual commands.
Two finger slide to pan | Used primarily for panning interactions.
Pinch and stretch to zoom | Used to resize and for semantic zooming.
Single finger press and slide to rearrange | Drags and drops an element.
Single finger press and slide to select text | Allows pressing within selectable text and sliding to select it. Double-tap is used to select a word.
Edges for system commands
  • Swiping from the right edge of the screen reveals the charms, exposing system commands.
  • The left and right click zones emulate the left and right mouse buttons.

 

Interpreting Manipulation and Gesture Interactions for Windows 7 Desktop

The IManipulationProcessor interface can be used in conjunction with the WM_TOUCH API to provide a way to add translation, rotation, scaling, and inertia to UI objects. This combination provides functionality similar to the gesture recognizing features of WM_POINTER. Once the Manipulation Processor is enabled, manipulation starts as soon as a touch gesture is initiated.
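A heavily simplified sketch of that combination follows. It assumes COM has been initialized, omits error handling, and leaves out the _IManipulationEvents sink (connected through IConnectionPointContainer) that a real application must implement to receive the translation, rotation, and scale results.

#include <windows.h>
#include <manipulations.h>   // IManipulationProcessor and the ManipulationProcessor coclass

// Feed WM_TOUCH contacts into a manipulation processor so it can report
// translation, rotation, scaling, and inertia through its event sink.
IManipulationProcessor* g_pManipProc = nullptr;

void CreateManipulationProcessor()
{
    // Assumes CoInitialize(Ex) has already been called.
    CoCreateInstance(__uuidof(ManipulationProcessor), nullptr, CLSCTX_INPROC_SERVER,
                     IID_PPV_ARGS(&g_pManipProc));
}

// Call for each TOUCHINPUT record extracted from a WM_TOUCH message.
void FeedTouchInput(const TOUCHINPUT& ti)
{
    // WM_TOUCH coordinates are in hundredths of a pixel.
    FLOAT x = ti.x / 100.0f;
    FLOAT y = ti.y / 100.0f;

    if (ti.dwFlags & TOUCHEVENTF_DOWN)
        g_pManipProc->ProcessDown(ti.dwID, x, y);
    else if (ti.dwFlags & TOUCHEVENTF_MOVE)
        g_pManipProc->ProcessMove(ti.dwID, x, y);
    else if (ti.dwFlags & TOUCHEVENTF_UP)
        g_pManipProc->ProcessUp(ti.dwID, x, y);
}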

Sample Code:

 

WM_GESTURE messages have a structure called GESTUREINFO that is available for the interpretation of gestures and manipulations. The MSDN web page for WM_GESTURE shows an example of how to obtain gesture-specific information using the GESTUREINFO structure.

Note that WM_GESTURE has limitations: it supports a maximum of two simultaneous touch inputs and does not support simultaneous gestures. For apps that require more capability but still need to support the Windows 7 desktop, use the WM_TOUCH interface and either write a custom gesture recognizer, as detailed in the section Custom Gesture Recognition below, or pair WM_TOUCH with the Manipulation Processor interface.
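For reference, decoding a WM_GESTURE message follows the general pattern below; this is a simplified sketch rather than the MSDN sample.

#include <windows.h>

// Decode a WM_GESTURE message. The GESTUREINFO structure reports which
// gesture occurred (pan, zoom, rotate, two-finger tap, press-and-tap).
LRESULT OnGesture(HWND hWnd, WPARAM wParam, LPARAM lParam)
{
    GESTUREINFO gi = {};
    gi.cbSize = sizeof(GESTUREINFO);

    if (GetGestureInfo(reinterpret_cast<HGESTUREINFO>(lParam), &gi))
    {
        switch (gi.dwID)
        {
        case GID_PAN:    /* gi.ptsLocation tracks the pan position            */ break;
        case GID_ZOOM:   /* gi.ullArguments carries the zoom distance         */ break;
        case GID_ROTATE: /* gi.ullArguments carries the encoded rotation angle */ break;
        }
        CloseGestureInfoHandle(reinterpret_cast<HGESTUREINFO>(lParam));
        return 0;
    }
    return DefWindowProc(hWnd, WM_GESTURE, wParam, lParam);
}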

Sample Code on Intel Developer Zone (WM_GESTURE API + GESTUREINFO): Sample Application: Touch for Desktop

For more information on writing touch-enabled apps, refer to the MSDN article: Touch Interaction Design.

 

Handling Manipulation and Gesture Interactions for Windows 8+ Desktop Apps

Applications targeted only for the Windows 8 Desktop can use the Direct Manipulation APIs (WM_POINTER messages). The pointer messages are passed to an internal Interaction Context object that performs recognition on the manipulation without the need to implement a custom gesture recognizer. There is a callback infrastructure where all interactions involving tracked contacts are managed.

Direct Manipulation is designed to handle both manipulation and gesture interactions and supports two models for processing input:

  1. Automatic/Independent: Window messages are automatically intercepted by Direct Manipulation on the delegate thread and handled without running application code, making it independent of the application.
  2. Manual/Dependent: Window messages are received by the window procedure running in the UI thread, which then calls Direct Manipulation to process the message, making it dependent on the application.

Gestures can be captured by initializing Direct Manipulation and preparing the system for input processing.

Refer to the Quickstart: Direct Manipulation on MSDN for an outline of the API calls required to accomplish typical tasks when working with Direct Manipulation.
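The following is only a rough sketch of the initialization steps that the Quickstart walks through, with the viewport event handler and per-frame rendering updates omitted; exact flags and error handling should be taken from the MSDN documentation.

#include <windows.h>
#include <directmanipulation.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Rough Direct Manipulation setup for a Win32 window (Windows 8+).
ComPtr<IDirectManipulationManager>  g_manager;
ComPtr<IDirectManipulationViewport> g_viewport;

void InitDirectManipulation(HWND hWnd)
{
    // Assumes CoInitialize(Ex) has already been called.
    CoCreateInstance(CLSID_DirectManipulationManager, nullptr,
                     CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&g_manager));

    g_manager->CreateViewport(nullptr, hWnd, IID_PPV_ARGS(&g_viewport));

    // Ask for pan and zoom manipulations with translation inertia.
    DIRECTMANIPULATION_CONFIGURATION config =
        DIRECTMANIPULATION_CONFIGURATION_INTERACTION |
        DIRECTMANIPULATION_CONFIGURATION_TRANSLATION_X |
        DIRECTMANIPULATION_CONFIGURATION_TRANSLATION_Y |
        DIRECTMANIPULATION_CONFIGURATION_TRANSLATION_INERTIA |
        DIRECTMANIPULATION_CONFIGURATION_SCALING;

    g_viewport->ActivateConfiguration(config);
    g_viewport->Enable();
    g_manager->Activate(hWnd);
}

// In the window procedure, hand new contacts to the viewport so Direct
// Manipulation can recognize the gesture.
void OnPointerDown(WPARAM wParam)
{
    g_viewport->SetContact(GET_POINTERID_WPARAM(wParam));
}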

 

Handling Manipulation and Gesture Interactions for Windows 8 Store Apps

The GestureRecognizer API is used to handle pointer input to process manipulation and gesture events. Each object returned by the PointerPoint method is used to feed pointer data to the GestureRecognizer. The gesture recognizer listens for and handles the pointer input and processes the static gesture events. For an example of how to create a GestureRecognizer object and then enable manipulation gesture events on that object, see the MSDN GestureRecognizer web page (referenced below).
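As a hedged C++/CX sketch (the member m_recognizer, the element MyCanvas, and the handler names are assumptions for illustration, not the MSDN sample), wiring pointer input into a GestureRecognizer looks roughly like this:

// C++/CX sketch: feed pointer input from a XAML element into a
// GestureRecognizer so it can raise Tapped and manipulation events.
using namespace Windows::UI::Input;
using namespace Windows::UI::Xaml::Input;

void MainPage::InitRecognizer()
{
    m_recognizer = ref new GestureRecognizer();
    m_recognizer->GestureSettings =
        GestureSettings::Tap |
        GestureSettings::ManipulationTranslateX |
        GestureSettings::ManipulationTranslateY |
        GestureSettings::ManipulationScale;

    m_recognizer->Tapped += ref new Windows::Foundation::TypedEventHandler<
        GestureRecognizer^, TappedEventArgs^>(this, &MainPage::OnTapped);
}

void MainPage::MyCanvas_PointerPressed(Platform::Object^ s, PointerRoutedEventArgs^ e)
{
    m_recognizer->ProcessDownEvent(e->GetCurrentPoint(MyCanvas));
}

void MainPage::MyCanvas_PointerMoved(Platform::Object^ s, PointerRoutedEventArgs^ e)
{
    m_recognizer->ProcessMoveEvents(e->GetIntermediatePoints(MyCanvas));
}

void MainPage::MyCanvas_PointerReleased(Platform::Object^ s, PointerRoutedEventArgs^ e)
{
    m_recognizer->ProcessUpEvent(e->GetCurrentPoint(MyCanvas));
}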

 

Direct Manipulation Support for Windows 8.1 Store Apps

For Windows 8.1, Direct Manipulation has been added for touchpad gestures. Note that the ScrollViewer class will be updated, and as a result, apps will no longer see UIElementPointerWheelChanged events that were available in Windows 8.

 

Custom Gesture Recognition

When possible, use the built-in gesture recognizers (see Table 3). If the provided gesture and manipulation interfaces do not provide the functionality that is needed, or if the app needs to disambiguate between taps and gestures more rapidly, it may be necessary to write a custom gesture recognition algorithm. If this is the case, users expect an intuitive experience involving direct interaction with the UI elements in the app. It is best to base custom interactions on the standard controls to keep user actions consistent and discoverable. Custom interactions should only be used if there is a clear, well-defined requirement and basic interactions don't support the app’s desired functionality. See Table 4 for the list of common and expected interactions and consequences for touch interactions.

Code Sample on Intel Developer Zone (WM_TOUCH with custom gesture recognition): Touch for Windows Desktop
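To make the idea concrete, here is a minimal, self-contained sketch of a custom recognizer that classifies a single contact as a left or right swipe. The thresholds are illustrative only and are not taken from the Intel sample linked above; positions can come from WM_TOUCH, WM_POINTER, or any other pointer source.

#include <cmath>
#include <cstdint>

// Classify a single contact as a horizontal swipe if it travels far enough,
// fast enough, and mostly sideways.
struct SwipeRecognizer
{
    float startX = 0, startY = 0;
    std::uint64_t startTimeMs = 0;

    static constexpr float kMinDistancePx = 50.0f;        // tuning values are
    static constexpr std::uint64_t kMaxDurationMs = 300;  // illustrative only

    void OnDown(float x, float y, std::uint64_t timeMs)
    {
        startX = x; startY = y; startTimeMs = timeMs;
    }

    // Returns +1 for a right swipe, -1 for a left swipe, 0 for no swipe.
    int OnUp(float x, float y, std::uint64_t timeMs) const
    {
        float dx = x - startX;
        float dy = y - startY;
        bool fastEnough     = (timeMs - startTimeMs) <= kMaxDurationMs;
        bool farEnough      = std::fabs(dx) >= kMinDistancePx;
        bool mostlySideways = std::fabs(dx) > 2.0f * std::fabs(dy);

        if (fastEnough && farEnough && mostlySideways)
            return dx > 0 ? +1 : -1;
        return 0;
    }
};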

 

Touch Support in Web Browsers

Touch input is also available to apps running in web browsers, with varying degrees of support depending on the browser. Since web browser capabilities change rapidly, it is generally better to detect supported features instead of specific browsers. Once you have determined whether you need to support Internet Explorer* (IE) 10/11, a browser built on WebKit*, or a different browser, feature detection has proven to be the more effective technique. Feature detection requires less maintenance for the following reasons:

  • New browsers get released and existing browsers are often updated. Existing code may not factor in the new browser versions. Updated browsers may support standards and features that were not supported when the browser detection code was designed.
  • New devices frequently include new versions of browsers, and so browser detection code must be reviewed continually to support the new browsers. Creating customized implementations for each browser can become extremely complicated.
  • Many browsers allow the user-agent string to be modified, making it difficult to identify the browser accurately.

WebKit powers Apple Safari* and Google Chrome*, and Opera will soon move its browser over to use it. Internet Explorer 10 does not use WebKit; however, both WebKit and IE 10 are built on top of the Document Object Model (DOM) Level 3 Core Specification. To review the standards associated with touch events, refer to the W3C Touch Events Version 1 specification, dated January 2013.

References:
Note: Please refer to the MSDN Terms of Use for licensing details.

 

IE 10 has its own touch interfaces that must be used to process touch events, so first determine whether the browser is Internet Explorer. The navigator object's userAgent property can be used for this, as the following example shows.

Usage:
<script type="text/javascript">
if (navigator.userAgent.indexOf("MSIE") > 0)
    {
         // Run custom code for Internet Explorer.
    }
</script>

Figure 1. Snippet for determining if browser is Internet Explorer*

Use the hasFeature method to determine if specific features are supported in the browser. For example, here is how to determine if a browser supports touch events (this works for IE 10 as well):

  var touchSupported = document.implementation.hasFeature("touch-events","3.0");

Where “touch-events” is the feature that we are checking for and “3.0” is the DOM specification level that we are interested in. An app can then listen for the following touch events: touchstart, touchend, touchmove, and touchcancel.

 

To process touch events using a WebKit-based browser (Chrome, Safari, etc.), simply set up the following three events to cover the main input states:

  canvas.addEventListener('touchstart', onTouchStart, false);
  canvas.addEventListener('touchmove', onTouchMove, false);
  canvas.addEventListener('touchend', onTouchEnd, false);

For Internet Explorer, reference the MSPointer event instead:

  canvas.addEventListener('MSPointerDown', onTouchStart, false);
  canvas.addEventListener('MSPointerMove', onTouchMove, false);
  canvas.addEventListener('MSPointerUp', onTouchEnd, false);

Similarly, WebKit-based (non-IE 10) browsers provide three gesture event listeners: gesturestart, gesturechange, and gestureend.

Download sample code handling DOM pointer events on MSDN: Input DOM pointer event handling sample. Note: Please refer to the MSDN Terms of Use for licensing details.

 

Internet Explorer 10 and its Compatibility with Windows 7

While IE 10 does not use WebKit, it is built on top of the DOM Level 3 Events, HTML5, and Progress Events standards. This section describes how IE 10 handles touch on Windows 7.

 

Internet Explorer 10 on Windows 7 handles touch and pen input as simulated mouse input for the following Document Object Model (DOM) events:

  • MSPointerCancel
  • MSPointerDown
  • MSPointerMove
  • MSPointerOver
  • MSPointerUp

IE 10 on Windows 7 will not fire any of the following DOM Gesture events:

  • MSGestureChange
  • MSGestureEnd
  • MSGestureHold
  • MSGestureStart
  • MSGestureTap
  • MSManipulationStateChanged

Table 6. Touch Interfaces for Internet Explorer* 10

Interface | Windows* 7, MSVS 2010 | Windows 8, MSVS 2012 (Desktop) | Windows 8 Modern UI | Remarks
MSGESTURE | No | Yes | Yes
  • Get high-level gestures such as hold, pan, and tap easily without capturing every pointer event individually.
MSPOINTER | Yes | Yes | Yes
  • Part of the Document Object Model (DOM) Core.
  • The getCurrentPoint and getIntermediatePoints methods retrieve a collection of PointerPoint objects and are available only on Windows 8.

 

For more information on developing touch-enabled web apps for IE 10 (MSDN): Internet Explorer 10 Guide for Developers

Sample code on MSDN: Input: Manipulations and gestures (JavaScript*)

Note: Please refer to the MSDN Terms of Use for licensing details.

 

Internet Explorer 11

IE 11 adds the following touch enhancements / APIs:

  • Direct manipulation for mouse, keyboard, touchpad, and touch, bringing hardware-accelerated pan and zoom to all input types.
  • Updates to the existing MSPointer APIs to reflect the latest Candidate Recommendation specification. IE 11 supports unprefixed Pointer Events.
  • New API, the msZoomTo method. This new method scrolls and/or zooms an element to a specified location and uses animation. Note: msZoomTo is not supported on Windows 7.
For more information on developing touch-enabled web apps for IE 11 (MSDN): Internet Explorer 11 Guide for Developers

 

IE 10 and IE 11 also include Cascading Style Sheets (CSS) properties applicable to touch. The following table summarizes the scrolling and zooming properties available in each Internet Explorer version, by input type.

Table 7. Scrolling and Zooming Properties for Internet Explorer* Versions 10 and 11

Scrolling/Zooming property for IE 10 and IE 11 | Touchscreen | Touchpad | Mouse | Keyboard
-ms-scroll-snap-points-x, -ms-scroll-snap-points-y, -ms-scroll-snap-type, -ms-scroll-snap-x, -ms-scroll-snap-y | IE 10+ | IE 11 | IE 11 | IE 11
-ms-content-zoom-chaining, msContentZoomFactor, -ms-content-zooming, -ms-content-zoom-limit, -ms-content-zoom-limit-max, -ms-content-zoom-limit-min, -ms-content-zoom-snap, -ms-content-zoom-snap-points, -ms-content-zoom-snap-type, -ms-scroll-chaining, -ms-scroll-rails | IE 10+ | IE 11 | -- | --
-ms-overflow-style, -ms-scroll-limit, -ms-scroll-limitXMax, -ms-scroll-limitXMin, -ms-scroll-limitYMax, -ms-scroll-limitYMin | IE 10+ | IE 10+ | IE 10+ | IE 10+
-ms-scroll-translation | -- | -- | IE 10+ | --

Identifying Touch Capability

Whether an app is a native app or a web app, the developer will want to add a check for hardware touch capability so that the app can configure its UI appropriately. Use the following methods to test for touch capability.

Windows 7 and Windows 8 Desktop

Apps targeting Windows 7 or Windows 8 Desktop can call GetSystemMetrics with SM_DIGITIZER as the argument. The following code snippet is part of a Touch sample that can be downloaded from the Intel Developer Zone: Touch for Windows Desktop

References:
Note: Please refer to the MSDN Terms of Use for licensing details.

 

    // Check for touch support:
    // get the touch capabilities by calling GetSystemMetrics.
    BYTE digitizerStatus = (BYTE)GetSystemMetrics(SM_DIGITIZER);
    // NID_READY (0x80) = touch stack is ready; NID_MULTI_INPUT (0x40) = multi-touch hardware
    if ((digitizerStatus & (0x80 + 0x40)) != 0) // stack ready + multi-touch
    {
        RegisterTouchWindow(m_pWindow->GetHWnd(), TWF_WANTPALM);
    }

Figure 2. Windows* 7 Example for identifying touch capability

Note that GetSystemMetrics can also be used to find the maximum number of touch points available:

   BYTE nInputs = (BYTE)GetSystemMetrics(SM_MAXIMUMTOUCHES);

Windows 8 (Windows Store apps)

Determine touch capabilities for Windows Store apps by using the TouchCapabilities class. The following code snippet can be found in the code sample on MSDN that demonstrates its use: Input: Device capabilities sample.

References:
Note: Please refer to the MSDN Terms of Use for licensing details.

 

void SDKSample::DeviceCaps::Touch::TouchGetSettings_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    Button^ b = safe_cast<Button^>(sender);
    if (b != nullptr)
    {
        TouchCapabilities^ pTouchCapabilities = ref new TouchCapabilities();
        Platform::String^ Buffer;

        // Report whether a touch digitizer is present and how many contacts it supports.
        Buffer = "There is " + (pTouchCapabilities->TouchPresent != 0 ? "a" : "no") + " digitizer present\n";
        Buffer += "The digitizer supports " + pTouchCapabilities->Contacts.ToString() + " contacts\n";

        TouchOutputTextBlock->Text = Buffer;
    }
}

Figure 3. Windows* UI Example for Identifying Touch Capability

Web apps

For Internet Explorer, use the msMaxTouchPoints property described as follows:

  Test for touch-capable hardware:
  if (navigator.msMaxTouchPoints) {…}

  Test for multi-touch capable hardware:
  if (navigator.msMaxTouchPoints > 1) {…}

  Get the maximum number of touch points the hardware supports:
  var touchPoints = navigator.msMaxTouchPoints;

For Chrome and Safari, use the following (same as above but replace msMaxTouchPoints with maxTouchPoints):

  var result = navigator.maxTouchPoints;

It can be somewhat tricky to test for touch devices generically from web apps. Some functions work well on some browsers, while others report that touch is present when it is not; if the browser itself supports touch, it may report touch as available even when the device is not touch-capable.

Note that MaxTouchPoints will return 0 in IE 10 (Desktop) running on Windows 7.

References:
Note: Please refer to the MSDN Terms of Use for licensing details.

 

UI design for Touch-Enabled Devices

Apps designed for touch-enabled devices may need to process gestures such as taps, pans, zooms, etc. Apps that are touch-enabled may do little with the raw pointer data except to pass it to gesture detection.

New applications should be designed with the expectation that touch will be the primary input method. Mouse and stylus support require no additional work; however, software developers should consider several other factors when designing touch-optimized apps.

Table 8. Considerations for Touch-Enabled Apps

Factor | Touch | Mouse/Stylus
Precision
  Touch:
  • Contact area for a fingertip is much larger than a single x-y coordinate.
  • The shape of the contact area changes with the movement.
  • There is no mouse cursor to help with targeting.
  Mouse/Stylus:
  • Mouse/stylus gives a precise x-y coordinate.
  • Keyboard focus is explicit.
Human Anatomy
  Touch:
  • Fingertip movements are imprecise.
  • Some areas on the touch surface may be difficult to reach.
  • Objects may be obscured by one or more fingertips.
  Mouse/Stylus:
  • Straight-line motions with the mouse/stylus are easier to perform.
  • Mouse/stylus can reach any part of the screen.
  • Indirect input devices do not cause obstruction.
Object state
  Touch:
  • Touch uses a two-state model: the touch surface is either touched or not. There is no hover state that can trigger additional visual feedback.
  Mouse/Stylus:
  • Three states are available: on, off, hover (focus).
Rich interaction
  Touch:
  • Multi-touch: multiple input points (fingertips) are available.
  Mouse/Stylus:
  • Supports only a single input point.

Software developers should supply appropriate visual feedback during interactions so that users can recognize, learn, and adapt to how their interactions are interpreted by both the app and the OS. Visual feedback is important for users to let them know if their interactions are successful, so they can improve their sense of control. It can help reduce errors and help users understand the system and input device.

Tips for Building Optimized, Responsive Apps

  1. The click delay: In web apps, the click event is delayed on mobile devices, causing pages to feel slow or unresponsive, because the browser waits to determine whether the tap is a single tap or a double-tap. The solution is to use a good fastclick library; fastclick libraries listen for touchend instead.
  2. Avoid expensive operations in touch handlers. Instead of processing the touches immediately, store them and process them later.
  3. Touch handlers can cause “Scroll Jank.” This can happen when there is a touch event handler on a page where scrolling is allowed. If the app needs both scrolling and touch events, make the touch area as small as possible.

Resources for Developing Touch Applications

Related Articles on Intel Developer Zone:

  1. Comparing Touch Coding Techniques – Windows 8 Desktop Touch Sample
  2. Exploring Touch Samples for Windows* 8 apps
  3. Touch Code Sample for Windows* 8 Store
  4. Touch Code Sample for Windows* 8 Desktop
  5. Porting Win32* Apps to Windows* 8 Desktop
  6. Real-Time Strategy Game with Touch Screen
  7. Virtual Trackpad
  8. Mixing Stylus and Touch Input on Windows* 8
  9. Implementing multi-user multi-touch scenarios using WPF in Windows* Desktop Apps

Related Articles on MSDN

  1. Windows 7 Touch Input Programming Guide
  2. Architectural Overview (Windows 7)
  3. Troubleshooting Applications
  4. Adding Manipulation Support in Unmanaged Code
  5. Windows Touch Samples
  6. Build Advanced Touch Apps in Windows 8* (Video)
  7. Windows 8 SDK
  8. Input: Touch hit testing sample
  9. Desktop App Development Documentation (Windows)
  10. Windows Touch Gestures Overview (Windows)
  11. Getting Started with Windows Touch Messages (Windows)
  12. Get PointerTouchInfo function (Windows)
  13. (MSDN) Internet Explorer 10 Guide for Developers
  14. (MSDN) Internet Explorer 11 Guide for Developers
  15. (MSDN) Touchpad Interactions
  16. (MSDN) Scrolling and Zooming with touch and other inputs
  17. (MSDN) Terms of Use

Web Apps

  1. Dragging and Scaling and Object (Sample Code)
  2. Handling Multi-Touch Gestures
  3. W3C Software Notice and License

Videos:

  1. Google I/O 2013 – Point, Click, Tap, Touch – Building Multi-Device Web Interfaces

Summary

Developers who want to develop touch-enabled apps, whether they are native or web apps, need to have a clear understanding of which APIs are available to them. This guide covered the interfaces available for the following environments: Windows 7, Windows 8+ Desktop, Windows Modern UI, as well as apps running in web browsers. While Gestures and Manipulations are possible in Windows 7, app developers may find that the Windows 8+ APIs (those targeted for the Desktop and/or for Windows Store apps) offer the best options for automatic gesture recognition.

Windows 8.1 and IE 11 add touchpad APIs and interactions. Many of the existing APIs used for gestures, manipulations, and pointers in Windows 8 Desktop, Windows 8 Store apps, and IE 11 have been updated as well.

Developers who are writing touch-enabled web apps need to check their code for IE 10+, since IE 10+ has its own interfaces that must be used to process touch, gestures, and manipulation. WebKit-based browsers are also based on the DOM Level 3 standards and have touch and gesture event support.

This guide also covered descriptions of common Gesture and Manipulation interactions and provided some guidelines for developing touch-enabled apps.

About the Author

Gael Hofemeier is an Evangelist Application Engineer at Intel Corporation. Her focus is providing technical content that developers writing software for Intel® Architecture need. In addition to writing content, she also moderates the Business Client Forum on the Intel Developer Zone.

See Gael's Blog Author Page

Notices

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR.

Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.

The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Any software source code reprinted in this document is furnished under a software license and may only be used or copied in accordance with the terms of that license.

Intel, the Intel logo, Ultrabook, and Core are trademarks of Intel Corporation in the US and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

What's New? Intel® Threading Building Blocks 4.2 update 3


Changes (w.r.t. Intel TBB 4.2 Update 2):

  • Added support for Microsoft* Visual Studio* 2013.
  • Improved the Microsoft* PPL-compatible form of parallel_for for better support of auto-vectorization (a short usage sketch follows this list).
  • Added a new example for cancellation and reset in the flow graph: Kohonen self-organizing map (examples/graph/som).
  • Various improvements in source code, tests, and makefiles.
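To illustrate the PPL-compatible form mentioned above, a minimal sketch: parallel_for can take a half-open index range and a lambda body directly, instead of an explicit blocked_range object.

#include <tbb/parallel_for.h>
#include <vector>

// The compact, PPL-compatible form of parallel_for: (first, last, body).
void scale(std::vector<float>& data, float factor)
{
    tbb::parallel_for(0, static_cast<int>(data.size()),
        [&](int i) {
            data[i] *= factor;   // each index is processed exactly once, in parallel
        });
}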

Bugs fixed:

  • Added dynamic replacement of _aligned_msize(), which was previously missed.
  • Fixed task_group::run_and_wait() to throw invalid_multiple_scheduling exception if the specified task handle is already scheduled.

Open-source contributions integrated:

  • A fix for ARM* processors by Steve Capper.
  • Improvements in std::swap calls by Robert Maynard.

You can download Intel TBB 4.2 update 3 from commercial and open source sites.

 

Digital Security and Surveillance on 4th generation Intel® Core™ processors Using Intel® System Studio


This article presents the advantages of developing embedded digital video surveillance systems to run on the 4th generation Intel® Core™ processor with Intel® HD Graphics, in combination with the Intel® System Studio software development suite. Intel® HD Graphics is useful for implementing many types of computer vision functionality in video management software, while Intel® System Studio is an embedded application development suite that is useful for developing robust digital video surveillance applications.

Intel® Integrated Native Developer Experience (Intel® INDE) – Frequently Asked Questions


Table of Contents

  1. What is Intel® INDE?
  2. What are the key features of Intel® INDE?
  3. How can I obtain a copy of the tool?
  4. What are the system hardware and software requirements to install and run the Intel® INDE?
  5. How do I receive licenses to the tools Intel® INDE provides?
  6. How do I receive updates to Intel® INDE and the tools it provides?
  7. What tools does Intel® INDE provide access to?
  8. Which target devices do the tools within Intel INDE support development for?
  9. There's a tool I'd like Intel® INDE to support. How do I share my feedback with Intel?
  10. Is an Internet connection required to use Intel® INDE?

What is Intel® INDE?
Intel® INDE is a beta cross-platform productivity suite built with today's developer in mind. It provides developers with tools, support, integration, and updates to create high-performance C++/Java* applications for Android* that run on ARM* and run best on Intel® architecture-based devices. The first release of Intel® INDE focuses on delivering tools to support each step of the development chain for Android* targets, with some support for Microsoft Windows* targets. Additional support will be added throughout the year.

What are the key features of Intel® INDE?
Intel® INDE enables faster development by providing tools, samples, and libraries for environment setup, code creation, compilation, debugging, and analysis. Developers can code quickly with tools integrated into popular IDEs while staying future-proof with automatic updates to the latest tools and technology. Intel® INDE supports 64-bit host systems running Microsoft Windows* 7-8.1, and target devices based on ARM* and Intel® architecture running Android* 4.3 and up, as well as Microsoft Windows* 7-8.1 devices based on Intel® architecture. Intel® INDE is an early access beta available for free at intel.com/software/INDE.

The key features supported in this release are:

  • Develop Quickly: IDE integration of tools into popular IDEs including Eclipse* & vs-android*
  • Easy-to-Use: C++/Java tools & samples for building, compiling, analyzing & debugging Android* applications.
  • Future-proof: automatic updates to the latest tools & technology.

How can I obtain a copy of the tool?
Intel announced INDE at MWC and the productivity suite will be available for download within Q1 of 2014. Please sign up to be alerted of its availability here.

What are the system hardware and software requirements to install and run Intel® INDE?
Intel® INDE runs on 64-bit Microsoft Windows 7-8.1 host systems. Your system must have Intel® Virtualization Technology¹ enabled in BIOS in order to run Intel® Hardware Accelerated Execution Manager (Intel® HAXM). Intel® HAXM is an optional installation in the environment setup component of Intel® INDE.

System Requirements:

  • Microsoft Windows* 7 64-bit or newer
  • 4GB RAM
  • 6GB free disk space

How do I receive licenses for the tools Intel® INDE provides?
Intel® INDE prompts you to view and accept licenses of all tools that you choose to download through Intel® INDE during tool installation.

How do I receive updates for Intel® INDE and the tools it provides?
Intel® INDE checks for software updates on a regular basis. Once a software update becomes available, you will receive a notification through Intel® INDE and will be prompted to install the update on your system.

What tools does Intel® INDE provide access to?
Intel® INDE provides optional installation of popular Intel and third-party tools including:

Intel Tools:

  • Intel® INDE Media pack for Android*
  • Intel® Threading Building Blocks
  • Compute code builder beta
  • Intel® C++ Compiler for Android*
  • System Analyzer – part of Intel® Graphics Performance Analyzers (Intel® GPA)
  • Platform Analyzer – part of Intel® GPA
  • Frame Analyzer – part of Intel® GPA
  • Intel® Frame Debugger beta
  • Environment Setup – includes an optional installation of Intel® Hardware Accelerated Execution Manager (Intel® HAXM)

Third-Party Tools:

  • Google Android* SDK – an optional component that is part of Environment Setup
  • Android* NDK – an optional component that is part of Environment Setup
  • Android* Design – an optional component that is part of Environment Setup
  • Apache Ant* – an optional component that is part of Environment Setup
  • vs-android plug-in for Microsoft* Visual Studio* - an optional installation that is part of Environment Setup

Which target devices do the tools within Intel INDE support development for?
All tools support development for Android* targets beginning with Android 4.3 and up, with the exception of the analyzers and frame debugger, which support Android 4.4 devices and up. The analyzers, compute code builder and threading tools also support Microsoft Windows* 7-8.1 client targets running on Intel architecture-based devices.

There's a tool I'd like Intel® INDE to support. How do I share my feedback with Intel?
Your feedback is important to us. Please submit your feedback to the Intel® INDE forum.

Is an Internet connection required to use Intel® INDE?
A stable internet connection is required to install Intel® INDE and the tools available within Intel® INDE. Once you've downloaded tools from Intel® INDE, an internet connection is not required to use them. It is, however, required to maintain and check for updates and notifications.


1Intel® Virtualization Technology requires a computer system with an enabled Intel® processor, BIOS, and virtual machine monitor (VMM). Functionality, performance or other benefits will vary depending on hardware and software configurations. Software applications may not be compatible with all operating systems. Consult your PC manufacturer. For more information, visit http://www.intel.com/go/virtualization.

Intel® System Studio: Samples and Tutorials


By downloading or copying all or any part of the sample source code, you agree to the terms of the Intel® Sample Source Code License Agreement.

Signal and Image Processing

Classic Algorithms

More Code Samples

Tutorials

  • Stay tuned

A collection of all code samples for the Intel C++ Compiler can be found here (not necessarily specific for Intel® System Studio).
