
2015 Beta product activation may fail on existing installations


When installing one of the Beta products,

     Intel® Composer XE 2015 Beta for Windows*
     Intel® Software Development Tools 2015 Beta for Windows*

on a machine where Intel® Software Development Tools are already installed, you may see the following error:

     'No Valid license was found'.

Until this problem is fixed with a later Beta update, apply one of the following workarounds:

  • Rename the .lic extension of the license files to something else during the Beta software installation, then rename them back afterward.
  • Create a backup subdirectory and move all the .lic files into it during the Beta software installation, then move them back afterward.

The default license directories under Windows are:

     c:\Program Files (x86)\Common Files\Intel\Licenses\ (on 64-bit Windows OS)
     c:\Program Files\Common Files\Intel\Licenses\ (on 32-bit Windows OS)
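
For anyone who prefers to script the second workaround, here is a rough C# sketch (not an official Intel tool) that moves the .lic files from the default 64-bit directory into a backup subfolder; adjust the path for 32-bit Windows, run it with administrative rights, and reverse the move after the Beta installation completes.

using System.IO;

// Unofficial helper sketch; run elevated because it writes under Program Files.
class LicenseBackup
{
    static void Main()
    {
        string licenseDir = @"c:\Program Files (x86)\Common Files\Intel\Licenses";
        string backupDir = Path.Combine(licenseDir, "Backup");
        Directory.CreateDirectory(backupDir);

        foreach (string file in Directory.GetFiles(licenseDir, "*.lic"))
        {
            // Move each license out of the way; swap source and destination
            // to restore the files after the Beta installation finishes.
            File.Move(file, Path.Combine(backupDir, Path.GetFileName(file)));
        }
    }
}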

 

 

 


Echoboom S.L. Now Enables Users to Rule the Sky with Dogfight for Intel® Atom™ Tablets for Windows* 8.1


Tablet users establish air superiority with this World War I flight simulator.

Echoboom S.L. has brought its multiplayer World War I combat flight simulator, Dogfight, to Intel® Atom™ tablets for Windows* 8.1. This highly rated free game offers hours of multiplayer gaming.

Selected by Microsoft as a “Top App” for Windows* 8.1, Dogfight is a multiplayer World War I flight simulator. Players can begin their air combat tour with training levels and then move up to dogfights with enemy planes. Users can choose to play alone or to join an active community of more than 200,000 pilots for massive online airplane battles.

When creating Dogfight, the developers at Echoboom S.L. benefitted from the resources and support community available through the Intel® Developer Zone.

“When we were developing Dogfight we knew that it would be ideal for tablets with powerful processors and crystal clear HD displays,” says Joaquín Grech, of Echoboom. “The features of the Intel Atom tablets for Windows* 8.1 make them perfect platforms for this app.”

Dogfight is available for immediate download from Microsoft: http://apps.microsoft.com/windows/en-us/app/3bece70f-570c-4ca4-885a-d5ae4d00e1e5

About Echoboom

Echoboom develops mobile applications. Its customized solutions bring together groundbreaking technologies, including geopositioning, cloud computing, social media, and in-app purchases. For more information, please visit its website at http://www.echoboom.com/

About Intel® Developer Zone

The Intel Developer Zone supports developers and software companies of all sizes and skill levels with technical communities, go-to-market resources and business opportunities. To learn more about becoming an Intel® Software Partner, join the Intel Developer Zone.

 

The Code Secrets behind My Health Assistant


By John Tyrrell


Many people take medication, sometimes multiple times per day, to help them stay healthy. Making sure meds are taken on time and in the right doses requires an individual to be vigilant and disciplined. Software developer Tim Corey saw a way to improve the error-prone process of tracking self-medication by using technology to provide an easy-to-use personal medication assistant, one that never forgot a dose and had a perfect memory. This idea led to his creation of My Health Assistant.

My Health Assistant is an app that helps individuals manage and track their medication use through a simple interface. The app also features a health diary, a GPS-based pharmacy and ER locator, and personal health information.


Figure 1: The main menu of My Health Assistant showing the touch UI.

Corey developed My Health Assistant as an entry in the Intel® App Innovation Contest 2013 hosted by CodeProject in partnership with the Intel® Developer Zone, and the app went on to win in the Health Category. The app was initially built for Microsoft Windows* desktop, with the ultimate target of Windows-based tablets, such as the Lenovo ThinkPad* Tablet 2, Ultrabook™ 2 in 1s running Windows 8.1*, Windows Phone* 8, and other mobile platforms.

Corey’s central development goals were portability, usability, and security. To reach these goals, he had to overcome a number of challenges throughout development, including implementing cross-platform touch UI and securing sensitive medical data. This case study explores those challenges, the solutions Corey applied, and the resources he used.

Decisions and Challenges

Corey took a modular approach to building the app, working on each piece of functionality separately in C# and wiring them together with the XAML Windows Presentation Foundation (WPF) UI using the Model View ViewModel (MVVM) design pattern.

Choosing C#

Prior to making My Health Assistant, Corey had considerable experience working with the object-oriented .NET programming language C#. He chose C# as the main language for building the app (using Microsoft Visual Studio*) for a number of reasons, including Microsoft’s support behind it, which brings an entire ecosystem of tools, libraries, and other resources.

As a key approach to creating a cross-platform app, C# can also be taken into practically any environment: from PC, Linux*, or Apple Mac* to Apple iPhone* or Google Android*. An additional strength of C# is that security and encryption support is built deep into the .NET platform, removing what could otherwise be a big hurdle, especially when dealing with sensitive data such as medical records.
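
As an illustration of that point (not code from My Health Assistant), the sketch below protects a string with the Windows Data Protection API that ships with the .NET Framework; it assumes a reference to System.Security.dll and uses a hypothetical helper class name.

using System.IO;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper, for illustration only. Requires a reference to
// System.Security.dll (ProtectedData lives in System.Security.Cryptography).
public static class LocalDataProtector
{
    // Encrypts a string with DPAPI, scoped to the current Windows user,
    // and writes the ciphertext to disk.
    public static void SaveProtected(string filePath, string plainText)
    {
        byte[] plainBytes = Encoding.UTF8.GetBytes(plainText);
        byte[] encrypted = ProtectedData.Protect(
            plainBytes, null, DataProtectionScope.CurrentUser);
        File.WriteAllBytes(filePath, encrypted);
    }

    // Reads the ciphertext back and decrypts it for the same Windows user.
    public static string LoadProtected(string filePath)
    {
        byte[] encrypted = File.ReadAllBytes(filePath);
        byte[] plainBytes = ProtectedData.Unprotect(
            encrypted, null, DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plainBytes);
    }
}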

Corey considered other language options such as VB.NET, another .NET language, and also Java*. However, in Corey’s opinion, none of them offered the same combination of familiarity and features that C# was able to provide.

C# Libraries

The Microsoft ecosystem around C# includes a large number of optimized libraries that Corey believes greatly simplify the coding process. In My Health Assistant, the app’s data is stored in XML files for ease of cross-platform portability, and because of the way the libraries have been incorporated into the programming framework, Corey was able to write one simple line of code and have all of that data taken care of.

In addition to the Microsoft libraries, many third-party libraries are available for C#. For the UI framework of My Health Assistant, Corey used Caliburn.Micro, which enabled him to connect the app’s front- and back-end code using MVVM. This approach allowed flexibility when editing the UI, removing the need for comprehensive recoding after making any modifications.
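
As a minimal sketch of that pattern (hypothetical class and property names, not Corey’s actual code), a Caliburn.Micro view model exposes properties and action methods that the framework binds to the view by naming convention:

using Caliburn.Micro;

// Illustrative view model only; the names are hypothetical.
public class MedicationViewModel : PropertyChangedBase
{
    private string _medicationName;

    // A control named "MedicationName" in the XAML view is bound to this
    // property automatically by Caliburn.Micro's conventions.
    public string MedicationName
    {
        get { return _medicationName; }
        set
        {
            _medicationName = value;
            NotifyOfPropertyChange(() => MedicationName);
        }
    }

    // A Button named "SaveMedication" in the view is wired to this method
    // by convention, so the view can be rearranged without recoding.
    public void SaveMedication()
    {
        // Persist the entry here (e.g., via the XML save code shown later).
    }
}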

WPF UI

To build the UI, Corey chose the Microsoft WPF system over Windows Forms because of its responsiveness to screen size changes, a vital ingredient of cross-platform development. With Windows 8 desktop and Windows Phone 8 both using WPF, Corey was quickly able to produce different versions of the app for each platform without major UI recoding.

In practice, the responsive WPF UI serves up elements in a particular number and size according to the available screen real estate. A full desktop view will display the full complement of buttons, whereas a mobile view will show only one or two with the others moved to a drop-down menu.
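
A rough sketch of that idea (a hypothetical helper, not the app’s code) is to derive the number of directly visible buttons from the current window width and push the rest into an overflow menu:

using System;

// Hypothetical helper illustrating the responsive-menu idea above.
public static class ResponsiveMenu
{
    // Returns how many menu buttons fit in the current window width;
    // the remaining buttons would be moved to a drop-down menu.
    public static int VisibleButtonCount(double windowWidth, double buttonWidth,
                                         int totalButtons)
    {
        int buttonsThatFit = (int)(windowWidth / buttonWidth);
        return Math.Max(1, Math.Min(buttonsThatFit, totalButtons));
    }
}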

Overcoming Touch and Scrolling Problems

Any app for portable devices, whether for smartphones, tablets, or Ultrabook devices, needs effective touch controls. While the desktop app works well with a mouse, Corey specifically designed it for touch, ensuring that simple actions such as scrolling through a menu worked well with a single finger. He even disabled the scroll bar to encourage finger scrolling.

The biggest hurdle that Corey faced during development was implementing the menu scrolling in the touch UI. The app needed to be told precisely the screen orientation and the size of the available screen real estate for menus and other elements; otherwise the app would assume that more space was available, rendering key elements such as menu buttons invisible and hence useless.

To enable touch scrolling in WPF, Corey added an attribute to the ScrollViewer that indicates the PanningMode, as in the code snippet below.

<ScrollViewer Grid.Row="2" HorizontalScrollBarVisibility="Disabled"
              VerticalScrollBarVisibility="Hidden" HorizontalAlignment="Center"
              PanningMode="VerticalOnly" Margin="0,0,0,0">

GPS Locator

One main feature of My Health Assistant is its ability to help users find the closest pharmacy or emergency room wherever they are. This feature uses the device’s GPS functionality and the Google Maps* API combined with relevant location data pulled in through an API to serve up accurate and relevant information on maps.


Figure 2: Google Maps* API integration lets users easily locate their nearest pharmacy or ER.

The code below is the class that holds the GPS code, which is responsible for acquiring the coordinates and raising an event once they have been located. It is an asynchronous transaction, which means that the app continues running normally while the coordinates are located.

public class GPSLocator
{
    public GeoCoordinateWatcher _geolocator { get; set; }

    public GPSLocator()
    {
        // Initializes the class when this class is loaded
        _geolocator = new GeoCoordinateWatcher();
    }

    // Asynchronously loads the current location into the private variables and
    // then alerts the user by raising an event
    public void LoadLocation()
    {
        try
        {
            _geolocator = new GeoCoordinateWatcher(GeoPositionAccuracy.Default);
        }
        catch (Exception)
        {
        }
    }
}

The following section of code calls the GPSLocator class and serves up the coordinates asynchronously. This code also provides the option of continuously acquiring new GPS coordinates, but, in the case of My Health Assistant, Corey assumed that the user would be stationary and hence would need only one set of coordinates. However, the GPS service could be left running to provide continually updated coordinates.

// Initializes the GPS
gps = new GPSLocator();

// Loads the watcher into the public property
gps.LoadLocation();

// Wires up the code that will be fired when the GPS coordinates are found.
// Finding the coordinates takes a couple seconds, so even though this code
// is here, it won't get fired right away. Instead, it will happen at the end
// of the process.
gps._geolocator.PositionChanged += (sensor, changed) =>
{
    // This code uses an internal structure to save the coordinates
    currentPosition = new Position();
    currentPosition.Latitude = changed.Position.Location.Latitude;
    currentPosition.Longitude = changed.Position.Location.Longitude;

    // This notifies my front-end that a couple of my properties have changed
    NotifyOfPropertyChange(() => CurrentLatitude);
    NotifyOfPropertyChange(() => CurrentLongitude);

    // A check is fired here to be sure that the position is correct (not zero).
    // If it is correct, we stop the GPS service (since it will continue to give
    // us new GPS coordinates as it finds them, which isn't what we need). If
    // the latitude or longitude are zero, we keep the GPS service running until
    // we find the correct location.
    if (currentPosition.Latitude != 0 && currentPosition.Longitude != 0)
    {
        gps._geolocator.Stop();
        LoadPharmacies();
    }
};

// This is where we actually kick off the locator. If you do not run this line,
// nothing will happen.
gps._geolocator.Start();

API Integration

For serving up local information on pharmacies and hospitals, Corey knew that choosing the right API was critical, because My Health Assistant had to be able to provide accurate information anywhere in the world, not just the United States. Corey considered several potentially good APIs, including the Walgreens and GoodRx APIs, but had to discount them because they didn’t work outside the United States. Corey ultimately chose the Factual Global Places API database, which contains global information. He was able to test its effectiveness while at a conference in Spain: a request for the nearest pharmacies produced a list of places within a couple of miles of his location.
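
The article doesn’t include Corey’s API client code. Conceptually, such a lookup is an HTTP GET with the device’s coordinates as query parameters; the sketch below uses HttpClient with a placeholder endpoint, parameter names, and key rather than the real Factual Global Places API contract.

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative only: the URL, parameters, and key below are placeholders.
public class PlacesClient
{
    private static readonly HttpClient _client = new HttpClient();

    // Requests places of a given category near the supplied coordinates
    // and returns the raw JSON payload for the caller to parse.
    public async Task<string> FindNearbyAsync(double latitude, double longitude,
                                              string category, string apiKey)
    {
        string url = string.Format(
            "https://api.example.com/places?lat={0}&lon={1}&category={2}&key={3}",
            latitude, longitude, Uri.EscapeDataString(category), apiKey);

        return await _client.GetStringAsync(url);
    }
}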


Figure 3: Users can store their personal doctor, pharmacy, and insurance information in the app.

Security Choices

Alongside portability, Corey cites security as the second key pillar of the application. The app’s default setting is to store data locally, which presents a relatively low security risk. However, during testing Corey found that users wanted the ability to access the stored data when using the app on different devices, which implied a cloud-based data storage solution and hence increased risk.

For cloud backup of the XML data files, rather than implement a complex API-driven solution into the app itself, Corey took the simpler route of adding cloud-based save options into the File Explorer view alongside local options. This approach made data backup intuitive while relying on encryption and the user’s own trust in the service of their choice for the security of the data, whether that be Microsoft SkyDrive*, Dropbox*, Box.net, or another service. The screen shot below shows how the cloud-storage save options appear in File Explorer view.


Figure 4: Users can back up data locally or directly to their chosen cloud service using File Explorer.
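
A minimal sketch of that approach (hypothetical names, not Corey’s code) simply hands the serialized XML to the standard WPF save dialog; SkyDrive, Dropbox, and similar folders appear there alongside local drives, and the sync client of the chosen service takes care of the upload.

using System.IO;
using Microsoft.Win32;

// Hypothetical backup helper, for illustration only.
public static class BackupHelper
{
    public static void BackupXml(string xmlContent)
    {
        var dialog = new SaveFileDialog
        {
            Filter = "XML files (*.xml)|*.xml",
            FileName = "MyHealthAssistantBackup.xml"
        };

        // ShowDialog returns true when the user confirms a destination,
        // which can be a local folder or a cloud-synced one.
        if (dialog.ShowDialog() == true)
        {
            File.WriteAllText(dialog.FileName, xmlContent);
        }
    }
}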

Saving Data

Initially, Corey had difficulties with the backup functionality, having started down the path of implementing a complex storage and retrieval mechanism for the XML files. However, a friend gave him a simple yet powerful piece of code, which immediately solved the problem.

All of the app’s data is saved to XML files that are then loaded at runtime to repopulate the application with the data. Below is the code that actually saves the data.

// This method saves the data to an XML file from a class instance that
// has been passed into this method. It uses generics (the T
// that is all over in here) so that any type can be used to pass
// data into this method to be saved.
public static void SaveData<T>(string filePath, T data)
{
    // Wraps the FileStream call in a using statement in order to ensure that the
    // resources used will be closed properly when they are no longer needed.
    // This file is created in read/write mode.
    using (FileStream fs = new FileStream(filePath, FileMode.Create, FileAccess.ReadWrite))
    {
        // This uses the XmlSerializer to convert our data into XML format
        XmlSerializer xs = new XmlSerializer(typeof(T));
        xs.Serialize(fs, data);
        fs.Close();
    }
}

Below is the code that subsequently loads the data.

// This method loads the data from an XML file and converts it back into
// an instance of the passed in class or object. It uses generics (the T
// that is all over in here) so that any type can be passed in as the
// type to load.
public static T LoadData<T>(string filePath)
{
    // This is what we are going to return. We initialize it to the default
    // value for T, which as a class is null. This way we can return the
    // output even if the file does not exist and we do not load anything.
    T output = default(T);

    // Checks first to be sure that the file exists. If not, don't do anything.
    if (File.Exists(filePath))
    {
        // Wraps the FileStream call in a using statement in order to ensure that the
        // resources used will be closed properly when they are no longer needed.
        // This opens the file in read-only mode.
        using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
        {
            // This uses the XmlSerializer to convert the XML document back into
            // the class format.
            XmlSerializer xs = new XmlSerializer(typeof(T));
            output = (T)xs.Deserialize(fs);
            fs.Close();
        }
    }

    // Returns the class instance, loaded with the data if the file was found
    return output;
}

The way the code works in the app is simple. First, the following command is used to save data stored in a class to an XML file on disk.

Data.SaveData<RecordsPanelViewModel>(FilePath, this);

The command below is used to load the data back into the class from the XML file, for example, when the application is launched and it needs to bring the data back in.

_records = Data.LoadData<RecordsPanelViewModel>(filePath);

Testing

Corey’s first line of attack when testing the app during the early stages of development was to come up with as many bad inputs as he could think of. At various stages he handed the app to several friends and family members, which proved to be an effective way to identify issues from an unbiased user perspective and refine the UI and overall usability.

The modular development approach made the process of rearranging the UI particularly quick and straightforward, allowing rapid iteration in response to feedback Corey received.

A number of UI bugs caused problems, particularly with the scrolling, which initially didn’t work in the way that Corey had expected. Another bug he fixed occurred with the medication dosage counter. For example, a 6-hour countdown for one medication would then cause the countdown for the subsequent medication to begin at 5 hours 59 minutes instead of 6 hours.
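
The article doesn’t show how that bug was fixed. One common way to avoid this kind of drift, sketched below with hypothetical names, is to derive each countdown from an absolute next-dose timestamp instead of starting one timer from another timer’s tick.

using System;

// Hypothetical illustration only; not code from My Health Assistant.
public class DoseTimer
{
    public DateTime NextDoseAt { get; private set; }

    public DoseTimer(TimeSpan interval)
    {
        NextDoseAt = DateTime.Now + interval;
    }

    // The remaining time is always derived from the fixed target time, so a
    // second medication scheduled six hours out shows 6:00:00, not 5:59:xx.
    public TimeSpan Remaining
    {
        get
        {
            TimeSpan left = NextDoseAt - DateTime.Now;
            return left > TimeSpan.Zero ? left : TimeSpan.Zero;
        }
    }
}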

Corey describes the debugging process as rough and long-winded, in contrast to the more satisfying process of actually building the app itself, but he has yet to encounter an issue for which he couldn’t find a solution.

Next Steps

At the time of this writing, Corey is targeting a summer 2014 release for My Health Assistant. The initial launch will be for Windows 8 Desktop, followed by Windows Phone 8 and then other mobile platforms, including iOS and Android. Post-launch, Corey looks forward to gathering user feedback and using that feedback to iterate and improve the app over time.

Another feature that Corey is investigating is the integration of a medication lookup API to provide users with information about particular pharmaceutical drugs and where to find them at the best prices. GoodRx is an example of an API that provides this within the United States, but the search is ongoing for a solution that is effective worldwide.

Conclusion

Knowledge Growth

While Corey had worked with XAML prior to My Health Assistant, most of it was on a much simpler level. Working on the app allowed him to significantly grow his XAML knowledge and learn how to design and build better apps for the future.

In addition to XAML, Corey also greatly expanded his working knowledge of Caliburn.Micro, the framework that Rob Eisenberg created for WPF and XAML. Despite a relatively steep learning curve, Corey considers the knowledge he has gained invaluable in terms of how to make things work within the disconnected framework environment.

Key Learnings

When giving advice to his software development students, Corey emphasizes the need for good planning—an approach to development that has been reinforced through his work on My Health Assistant. The experience taught him that more time spent in the design phase means less in development and debugging.

During the design process, which involved a lot of iterating on paper, Corey frequently threw things out. Casting aside ideas and features would have been more difficult to do in the development stage, resulting in wasted time. Iterating during the preliminary design phase proved to be much more efficient.

Corey also learned the danger of making technical assumptions about how things would work and not testing them prior to coding them into the app. A number of times during development, Corey found that certain things, such as scrolling, didn’t behave in the way he expected, resulting in having to discard the code. Corey recommends building small applications throughout the development phase to test assumptions regarding specific functionality and behavior.

About the Developer

Tim Corey began his career as a software developer and IT professional in the late 90s, with roles as a programmer and IT director. In 2011 Corey graduated from South University with a bachelor’s degree in IT and Database Administration. Subsequent roles have included lead programming analyst for an insurance group. Corey is currently the lead technical consultant at Epicross, his own consulting firm, which aims to help organizations achieve greater IT efficiency through the optimization of their existing technology. He also teaches software development.

Helpful Resources

Corey relies heavily on a variety of external resources to find solutions to problems. Chief among these is Pluralsight, which offers subscription-based video training. Corey often watches 3 or 4 hours of video at a time when he’s learning a new topic or working to improve existing skills.

For specific problems, Corey often visits CodeProject and searches for answers among its vast collection of articles, in its tips and tricks section, or by asking a question on the forums. Corey’s other go-to resource is Stack Overflow, which he considers to be the Wikipedia of software developers, and where almost any question imaginable has already been answered, often multiple times.

Corey occasionally uses Microsoft documentation, and also regularly turns to Twitter for support. He either throws questions out to his Twitter followers or directs queries to specific individuals, such as Rob Eisenberg, whom Corey cites as a particularly big help when called upon.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed.  Join our communities for the Internet of Things, Android, Intel® RealSense™ Technology and Windows to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.


 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others. 
Copyright © 2014. Intel Corporation. All rights reserved.

Developing Games with MonoGame*


By Bruno Sonnino


Developers everywhere want to develop games. And why not? Games are among the best sellers in computer history, and the fortunes involved in the game business keep attracting developers to it. As a developer, I’d certainly like to be the one who develops the next Angry Birds* or Halo*.

In practice, however, game development is one of the most difficult areas of software development. You have to remember those trigonometry, geometry, and physics classes that you thought you’d never use. Besides that, your game must combine sound, video, and a story in a way that the user will want to play it more and more. And all that is before you write a single line of code!

To make things easier, frameworks are available for developing games using not only C and C++ but even C# or JavaScript* (yes, you can develop three-dimensional games for your browser using HTML5 and JavaScript).

One of these frameworks is Microsoft XNA*, which builds on Microsoft DirectX* technology, allowing you to create games for Xbox 360*, Windows*, and Windows Phone*. Microsoft has phased out XNA, but meanwhile, the open source community has introduced a new player: MonoGame*.

What Is MonoGame?

MonoGame is an open source implementation of the XNA application programming interface (API). It implements the XNA API not only for Windows but also for Mac* OS X*, Apple iOS*, Google Android*, Linux*, and Windows Phone. That means you can develop a game for all those platforms with only a few minor changes. That’s a wonderful feature: you can create games using C# that can be ported easily to all major desktop, tablet, and smartphone platforms. It’s a great start for those who want to conquer the world with their games.

Installing MonoGame on Windows

You don’t even need Windows to develop with MonoGame. You can use MonoDevelop* (an open source cross-platform integrated development environment [IDE] for Microsoft .NET languages) or Xamarin Studio*, a cross-platform IDE developed by Xamarin. With these IDEs, you can develop using C# on Linux or Mac.

If you are a Microsoft .NET developer and you use Microsoft Visual Studio* on a daily basis, as I do, you can install MonoGame in Visual Studio and use it to create your games. At the time of this writing, the latest stable version of MonoGame is version 3.2. This version runs in Visual Studio 2012 and 2013 and allows you to create a DirectX desktop game, which you will need if you want to support touch in the game.

The installation of MonoGame comes with many new templates in Visual Studio that you can choose to create your games, as shown in Figure 1.

Figure 1. New MonoGame* templates

Now, to create your first game, click MonoGame Windows Project and then select a name. Visual Studio creates a new project with all the files and references needed. If you run this project, you’ll get something like Figure 2.

Figure 2. Game created in a MonoGame* template

Dull, isn’t it? Just a blue screen, but this is the start for any game you build. Press Esc, and the window closes.

You can start writing your game with the project you have now, but there is a catch. You won’t be able to add any assets (images, sprites, sounds, or fonts) without compiling them to a format compatible with MonoGame. For that, you need one of these options:

  • Install XNA Game Studio 4.0
  • Install the Windows Phone 8 software development kit (SDK)
  • Use an external program like XNA content compiler

XNA Game Studio

XNA Game Studio has everything you need to create XNA games for Windows and Xbox 360. It also has a content compiler that can compile the assets into .xnb files that you can then use in your MonoGame project. Currently, you can install the compiler only in Visual Studio 2010. If you don’t want to install Visual Studio 2010 just for that purpose, there is a workaround for installing XNA Game Studio in Visual Studio 2012 (see the link in the “For More Information” section of this article).

Windows Phone 8 SDK

You can’t install XNA Game Studio directly in Visual Studio 2012, but the Windows Phone 8 SDK installs fine in Visual Studio 2012. You can use it to create a project to compile your assets.

XNA Content Compiler

If you don’t want to install an SDK to compile your assets, you can use the XNA content compiler (see the link in “For More Information”), an open source program that can compile your assets to .xnb files that can be used in MonoGame.

Create Your First Game

The game created earlier with the MonoGame template is the starting point for all games; you will use the same process to create every game. In Program.cs, you have the Main function, which initializes and runs the game:

static void Main()
{
    using (var game = new Game1())
        game.Run();
}

Game1.cs is the core of the game. There, you have two methods that are called in a loop 60 times per second: Update and Draw. In Update, you recalculate data for all the elements in the game; in Draw, you draw these elements. Note that this is a tight loop. You have 1/60th of a second, or 16.7 milliseconds, to calculate and draw the data. If you take more than that, the program may skip some Draw cycles, and you will see graphical glitches in your game.
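
For reference, here is an abridged sketch of the Game1 class that the template generates (details vary slightly between MonoGame versions), showing where Update and Draw sit:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Abridged template sketch; the generated file contains a few more comments.
public class Game1 : Game
{
    private GraphicsDeviceManager _graphics;
    private SpriteBatch _spriteBatch;

    public Game1()
    {
        _graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // The SpriteBatch is used to draw 2D textures.
        _spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // Recalculate the game state here, 60 times per second.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        // Render the current state here, once per Update.
        GraphicsDevice.Clear(Color.CornflowerBlue);
        base.Draw(gameTime);
    }
}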

Until recently, the input for games on desktop computers was the keyboard and mouse. Unless the user had purchased extra hardware, like driving wheels or joysticks, you could not assume that there was any other input method. With the new hardware now available, like Ultrabook™ devices, Ultrabook 2 in 1s, and all-in-one PCs, your options have changed. You can use touch input and sensors, giving users a more immersive and realistic game.

For this first game, we will create a penalty shootout soccer game. The user will use touch to “kick” the ball, and the computer goalkeeper will try to catch the ball. The direction and speed of the ball will be determined by the user’s flick. The computer goalkeeper will choose a random side and velocity to catch the ball. Each goal scored results in one point. Otherwise, the goalkeeper gets the point.

Add Content to the Game

The first step in the game is to add content. Start by adding the background field and the ball. To do so, create two .png files: one for the soccer field (Figure 3) and the other for the ball (Figure 4).

 

Figure 3. The soccer field

 

 

Figure 4. The soccer ball

To use these files in the game, you must compile them. If you are using XNA Game Studio or the Windows Phone 8 SDK, you must create an XNA content project. That project doesn’t need to be in the same solution. You’ll use it only to compile the assets. Add the images to this project, and build it. Then, go to the project target directory and copy the resulting .xnb files to your project.

I prefer to use the XNA Content Compiler, which doesn’t require a new project and allows you to compile the assets as needed. Simply open the program, add the files to the list, select the output directory, and click Compile. The .xnb files are ready to be added to the project.

Content Files

When the .xnb files are available, add them to the Content folder of your game. You must set the build action for each file as Content and the Copy to Output Directory to Copy if Newer. If you don’t do that, you will get an error when you try to load the assets.

Create two fields in which to store the textures of the ball and the field:

private Texture2D _backgroundTexture;
private Texture2D _ballTexture;

These fields are loaded in the LoadContent method:

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    _spriteBatch = new SpriteBatch(GraphicsDevice);

    // TODO: use this.Content to load your game content here
    _backgroundTexture = Content.Load<Texture2D>("SoccerField");
    _ballTexture = Content.Load<Texture2D>("SoccerBall");
}

Note that the names of the textures are the same as the files in the Content folder but without the extension.

Next, draw the textures in the Draw method:

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Green);

    // Set the position for the background
    var screenWidth = Window.ClientBounds.Width;
    var screenHeight = Window.ClientBounds.Height;
    var rectangle = new Rectangle(0, 0, screenWidth, screenHeight);
    // Begin a sprite batch
    _spriteBatch.Begin();
    // Draw the background
    _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
    // Draw the ball
    var initialBallPositionX = screenWidth / 2;
    var initialBallPositionY = (int)(screenHeight * 0.8);
    var ballDimension = (screenWidth > screenHeight) ?
        (int)(screenWidth * 0.02) :
        (int)(screenHeight * 0.035);
    var ballRectangle = new Rectangle(initialBallPositionX, initialBallPositionY,
        ballDimension, ballDimension);
    _spriteBatch.Draw(_ballTexture, ballRectangle, Color.White);
    // End the sprite batch
    _spriteBatch.End();
    base.Draw(gameTime);
}

This method clears the screen with a green color and then draws the background and the ball at the penalty mark. The first spriteBatch Draw call draws the background at position 0,0, resized to the size of the window; the second draws the ball at the penalty mark, resized proportionally to the window size. There is no movement here because the positions don’t change. The next step is to move the ball.

Move the Ball

To move the ball, you must recalculate its position for each iteration in the loop and draw it in the new position. Perform the calculation of the new position in the Update method:

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
        Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    // TODO: Add your update logic here
    _ballPosition -= 3;
    _ballRectangle.Y = _ballPosition;
    base.Update(gameTime);

}

The ball position is updated in every loop by subtracting three pixels. If you want to make the ball move faster, you must subtract more pixels. The variables _screenWidth, _screenHeight, _backgroundRectangle, _ballRectangle, and _ballPosition are private fields, initialized in the ResetWindowSize method:

private void ResetWindowSize()
{
    _screenWidth = Window.ClientBounds.Width;
    _screenHeight = Window.ClientBounds.Height;
    _backgroundRectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
    _initialBallPosition = new Vector2(_screenWidth / 2.0f, _screenHeight * 0.8f);
    var ballDimension = (_screenWidth > _screenHeight) ?
        (int)(_screenWidth * 0.02) :
        (int)(_screenHeight * 0.035);
    _ballPosition = (int)_initialBallPosition.Y;
    _ballRectangle = new Rectangle((int)_initialBallPosition.X, (int)_initialBallPosition.Y,
        ballDimension, ballDimension);
}

This method resets all variables that depend on the window size. It is called in the Initialize method:

protected override void Initialize()
{
    // TODO: Add your initialization logic here
    ResetWindowSize();
    Window.ClientSizeChanged += (s, e) => ResetWindowSize();
    base.Initialize();
}

ResetWindowSize is called in two different places: at the beginning of the process and every time the window size changes. Initialize subscribes to ClientSizeChanged, so when the window size changes, the variables that depend on the window size are reevaluated and the ball is repositioned at the penalty mark.

If you run the program, you will see that the ball moves in a straight line but doesn’t stop when the field ends. You can reposition the ball when it reaches the goal with the following code:

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
        Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    // TODO: Add your update logic here
    _ballPosition -= 3;
    if (_ballPosition < _goalLinePosition)
        _ballPosition = (int)_initialBallPosition.Y;

    _ballRectangle.Y = _ballPosition;
    base.Update(gameTime);

}

The _goalLinePosition variable is another field, initialized in the ResetWindowSize method:

_goalLinePosition = _screenHeight * 0.05;

You must make one other change in the Draw method: remove all the calculation code.

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.Green);

    var rectangle = new Rectangle(0, 0, _screenWidth, _screenHeight);
    // Begin a sprite batch
    _spriteBatch.Begin();
    // Draw the background
    _spriteBatch.Draw(_backgroundTexture, rectangle, Color.White);
    // Draw the ball

    _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
    // End the sprite batch
    _spriteBatch.End();
    base.Draw(gameTime);
}

The movement is perpendicular to the goal. If you want the ball to move at an angle, create a _ballPositionX field and increment it (to move to the right) or decrement it (to move to the left). A better way is to use a Vector2 for the ball position, like this:

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
        Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    // TODO: Add your update logic here
    _ballPosition.X -= 0.5f;
    _ballPosition.Y -= 3;
    if (_ballPosition.Y < _goalLinePosition)
        _ballPosition = new Vector2(_initialBallPosition.X,_initialBallPosition.Y);
    _ballRectangle.X = (int)_ballPosition.X;
    _ballRectangle.Y = (int)_ballPosition.Y;
    base.Update(gameTime);

}

If you run the program, it will show the ball moving at an angle (Figure 5). The next step is to make the ball move when the user flicks it.

Figure 5. Game with the ball moving

Touch and Gestures

In this game, the motion of the ball must start with a touch flick. This flick determines the direction and velocity of the ball.

In MonoGame, you can get touch input using the TouchPanel class. You can use the raw input data or the Gestures API. The raw input data is more flexible because you can process all input data the way you want, while the Gestures API transforms this raw data into filtered gestures so that you receive input only for the gestures you want.

Although the Gestures API is easier to use, there are some cases when it can’t be used. For example, if you want to detect a special gesture, like an X shape or multifinger gestures, you will need to use the raw data.

For this game, we only need the flick, and the Gestures API supports that, so we will use it. The first thing to do is indicate which gestures you want by using the TouchPanel class. For example, the code:

TouchPanel.EnabledGestures = GestureType.Flick | GestureType.FreeDrag;

. . . makes MonoGame detect and notify you of flicks and drags only. Then, in the Update method, you can process the gestures as follows:

if (TouchPanel.IsGestureAvailable)
{
    // Read the next gesture
    GestureSample gesture = TouchPanel.ReadGesture();
    if (gesture.GestureType == GestureType.Flick)
    {…
    }
}

First, determine whether any gesture is available. If so, you can call ReadGesture to get and process it.

Initiate Movement with Touch

First, enable flick gestures in the game using the Initialize method:

protected override void Initialize()
{
    // TODO: Add your initialization logic here
    ResetWindowSize();
    Window.ClientSizeChanged += (s, e) => ResetWindowSize();
    TouchPanel.EnabledGestures = GestureType.Flick;
    base.Initialize();
}

Until now, the ball has kept moving while the game was running. Use a private field, _isBallMoving, to tell the game when the ball is moving. In the Update method, when the program detects a flick, you set _isBallMoving to True, and the movement starts. When the ball reaches the goal line, set _isBallMoving to False and reset the ball position:

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed ||
        Keyboard.GetState().IsKeyDown(Keys.Escape))
        Exit();

    // TODO: Add your update logic here
    if (!_isBallMoving && TouchPanel.IsGestureAvailable)
    {
        // Read the next gesture
        GestureSample gesture = TouchPanel.ReadGesture();
        if (gesture.GestureType == GestureType.Flick)
        {
            _isBallMoving = true;
            _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
        }
    }
    if (_isBallMoving)
    {
        _ballPosition += _ballVelocity;
        // reached goal line
        if (_ballPosition.Y < _goalLinePosition)
        {
            _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
            _isBallMoving = false;
            while (TouchPanel.IsGestureAvailable)
                TouchPanel.ReadGesture();
        }
        _ballRectangle.X = (int) _ballPosition.X;
        _ballRectangle.Y = (int) _ballPosition.Y;
    }
    base.Update(gameTime);

}

The ball increment is no longer constant: the program uses the _ballVelocity field to set the ball velocity in the x and y directions. Gesture.Delta returns the movement variation since the last update. To turn the flick into a per-frame velocity, the code multiplies this vector by TargetElapsedTime.TotalSeconds and divides the result by 5 to scale it down.

If the ball is moving, the _ballPosition vector is incremented by the velocity (in pixels per frame) until the ball reaches the goal line. The following code:

_isBallMoving = false;
while (TouchPanel.IsGestureAvailable)
    TouchPanel.ReadGesture();

. . . does two things: it stops the ball and removes all pending gestures from the input queue. If you don’t do that, the user can flick while the ball is moving, making it move again after it has stopped.

When you run the game, you can flick the ball, and it will move in the direction you flicked with the speed of the flick. There is one catch here, however. The code doesn’t detect where the flick occurred. You can flick anywhere on the screen (not just on the ball), and the ball will start moving. You could use gesture.Position to detect the position of the flick, but that property always returns 0,0 for flicks, so it can’t be used here.

The solution is to use the raw input, get the touch point, and see if it is near the ball. The following code determines whether the touch input hits the ball. If it does, the gesture sets the _isBallHit field:

TouchCollection touches = TouchPanel.GetState();

if (touches.Count > 0 && touches[0].State == TouchLocationState.Pressed)
{
    var touchPoint = new Point((int)touches[0].Position.X, (int)touches[0].Position.Y);
    var hitRectangle = new Rectangle((int)_ballPositionX, (int)_ballPositionY, _ballTexture.Width,
        _ballTexture.Height);
    hitRectangle.Inflate(20,20);
    _isBallHit = hitRectangle.Contains(touchPoint);
}

Then, the movement starts only if the _isBallHit field is True:

if (TouchPanel.IsGestureAvailable && _isBallHit)

If you run the game, you will only be able to move the ball if the flick starts on it. There is still one issue here, though: if you hit the ball too slowly, or in a direction where it will never cross the goal line, the game gets stuck because the ball never returns to the start position. You must set a timeout for the ball movement. When the timeout is reached, the game repositions the ball.

The Update method has one parameter: gameTime. If you store the gameTime value when the movement starts, you can know the actual time the ball is moving and reset the game after the timeout:

if (gesture.GestureType == GestureType.Flick)
{
    _isBallMoving = true;
    _isBallHit = false;
    _startMovement = gameTime.TotalGameTime;
    _ballVelocity = gesture.Delta*(float) TargetElapsedTime.TotalSeconds/5.0f;
}

...

var timeInMovement = (gameTime.TotalGameTime - _startMovement).TotalSeconds;
// reached goal line or timeout
if (_ballPosition.Y < _goalLinePosition || timeInMovement > 5.0)
{
    _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
    _isBallMoving = false;
    _isBallHit = false;
    while (TouchPanel.IsGestureAvailable)
        TouchPanel.ReadGesture();
}

Add a Goalkeeper

The game is now working, but it needs an element of difficulty: you must add a goalkeeper who will keep moving after the user kicks the ball. The goalkeeper is a .png file that the XNA Content Compiler compiles (Figure 6). You must add this compiled file to the Content folder, set its build action to Content, and set Copy to Output Directory to Copy if Newer.

Figure 6. The goalkeeper

The goalkeeper is loaded in the LoadContent method:

protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    _spriteBatch = new SpriteBatch(GraphicsDevice);

    // TODO: use this.Content to load your game content here
    _backgroundTexture = Content.Load<Texture2D>("SoccerField");
    _ballTexture = Content.Load<Texture2D>("SoccerBall");
    _goalkeeperTexture = Content.Load<Texture2D>("Goalkeeper");
}

Then, you must draw it in the Draw method:

protected override void Draw(GameTime gameTime)
{

    GraphicsDevice.Clear(Color.Green);

    // Begin a sprite batch
    _spriteBatch.Begin();
    // Draw the background
    _spriteBatch.Draw(_backgroundTexture, _backgroundRectangle, Color.White);
    // Draw the ball
    _spriteBatch.Draw(_ballTexture, _ballRectangle, Color.White);
    // Draw the goalkeeper
    _spriteBatch.Draw(_goalkeeperTexture, _goalkeeperRectangle, Color.White);
    // End the sprite batch
    _spriteBatch.End();
    base.Draw(gameTime);
}

_goalkeeperRectangle holds the goalkeeper’s rectangle in the window. It is updated in the Update method:

protected override void Update(GameTime gameTime)
{…

   _ballRectangle.X = (int) _ballPosition.X;
   _ballRectangle.Y = (int) _ballPosition.Y;
   _goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
                    _goalKeeperWidth, _goalKeeperHeight);
   base.Update(gameTime);
}

The _goalkeeperPositionY, _goalKeeperWidth, and _goalKeeperHeight fields are updated in the ResetWindowSize method:

private void ResetWindowSize()
{…
    _goalkeeperPositionY = (int) (_screenHeight*0.12);
    _goalKeeperWidth = (int)(_screenWidth * 0.05);
    _goalKeeperHeight = (int)(_screenWidth * 0.005);
}

The initial goalkeeper position is in the center of the screen, at the top near the goal line:

_goalkeeperPositionX = (_screenWidth - _goalKeeperWidth)/2;

The goalkeeper will start moving when the ball does. It will keep moving from one side to the other in a harmonic motion. This sine curve describes its movement:

X = A * sin(at + δ)

where A is the movement amplitude (the goal width), t is the time of the movement, and a and δ are random coefficients (this will make the movement somewhat random so the user can’t predict the speed and side that the goalkeeper will take).

The coefficients are calculated when the user kicks the ball with a flick:

if (gesture.GestureType == GestureType.Flick)
{
    _isBallMoving = true;
    _isBallHit = false;
    _startMovement = gameTime.TotalGameTime;
    _ballVelocity = gesture.Delta * (float)TargetElapsedTime.TotalSeconds / 5.0f;
    var rnd = new Random();
    _aCoef = rnd.NextDouble() * 0.005;
    _deltaCoef = rnd.NextDouble() * Math.PI / 2;
}

The a coefficient is the velocity of the goalkeeper, a number between 0 and 0.005 that represents a velocity between 0 and 0.3 pixels/second (a maximum of 0.005 pixels per 1/60th of a second). The delta coefficient is a number between 0 and pi/2. When the ball is moving, you update the goalkeeper’s position:

if (_isBallMoving)
{
    _ballPositionX += _ballVelocity.X;
    _ballPositionY += _ballVelocity.Y;
    _goalkeeperPositionX = (int)((_screenWidth * 0.11) *
                      Math.Sin(_aCoef * gameTime.TotalGameTime.TotalMilliseconds +
                      _deltaCoef) + (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11);…
}

The amplitude of the movement is _screenWidth * 0.11 (the size of the goal). Add (_screenWidth * 0.75) / 2.0 + _screenWidth * 0.11 to the result, so that the goalkeeper moves in front of the goal. Now, it’s time to make the goalkeeper catch the ball.

Hit Testing

If you want to know whether the goalkeeper catches the ball, you have to know whether the ball rectangle intersects the goalkeeper’s rectangle. You do this in the Update method, after you calculate the two rectangles:

_ballRectangle.X = (int)_ballPosition.X;
_ballRectangle.Y = (int)_ballPosition.Y;
_goalkeeperRectangle = new Rectangle(_goalkeeperPositionX, _goalkeeperPositionY,
    _goalKeeperWidth, _goalKeeperHeight);
if (_goalkeeperRectangle.Intersects(_ballRectangle))
{
    ResetGame();
}

ResetGame is just a refactoring of the code to reset the game to its initial state:

private void ResetGame()
{
    _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
    _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
    _isBallMoving = false;
    _isBallHit = false;
    while (TouchPanel.IsGestureAvailable)
        TouchPanel.ReadGesture();
}

With this simple code, the game knows whether the goalkeeper caught the ball. Now, you must know whether the ball hit the goal. You do this when the ball passes the goal line.

var isTimeout = timeInMovement > 5.0;
if (_ballPosition.Y < _goalLinePosition || isTimeout)
{
    bool isGoal = !isTimeout &&
        (_ballPosition.X > _screenWidth * 0.375) &&
        (_ballPosition.X < _screenWidth * 0.623);
    ResetGame();
}

The ball must be completely in the goal, so its position must start after the first goal post (_screenWidth * 0.375) and must end before the second goal post (_screenWidth * 0.625 − _screenWidth * 0.02). Now it’s time to update the game score.

Add Scorekeeping

To add scorekeeping to the game, you must add a new asset: a spritefont with the font used in the game. A spritefont is an .xml file describing the font—the font family, its size and weight, along with some other properties. In the game, you can use a spritefont like this:

<?xml version="1.0" encoding="utf-8"?>
<XnaContent xmlns:Graphics="Microsoft.Xna.Framework.Content.Pipeline.Graphics">
  <Asset Type="Graphics:FontDescription">
    <FontName>Segoe UI</FontName>
    <Size>24</Size>
    <Spacing>0</Spacing>
    <UseKerning>false</UseKerning>
    <Style>Regular</Style>
    <CharacterRegions>
      <CharacterRegion>
        <Start> </Start>
        <End></End>
      </CharacterRegion>
    </CharacterRegions>
  </Asset>
</XnaContent>

You must compile this .xml file with XNA Content Compiler and add the resulting .xnb file to the Content folder of the project; set its build action to Content and the Copy to Output Directory to Copy if Newer. The font is loaded in the LoadContent method:

_soccerFont = Content.Load<SpriteFont>("SoccerFont");

In ResetWindowSize, reset the position of the score:

var scoreSize = _soccerFont.MeasureString(_scoreText);
_scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);

To keep score, declare two variables: _userScore and _computerScore. The _userScore variable is incremented when a goal occurs, and _computerScore is incremented when the ball goes out, there is a timeout, or the goalkeeper catches the ball:

if (_ballPosition.Y < _goalLinePosition || isTimeout)
{
    bool isGoal = !isTimeout &&
                  (_ballPosition.X > _screenWidth * 0.375) &&
                  (_ballPosition.X < _screenWidth * 0.623);
    if (isGoal)
        _userScore++;
    else
        _computerScore++;
    ResetGame();
}
…
if (_goalkeeperRectangle.Intersects(_ballRectangle))
{
    _computerScore++;
    ResetGame();
}

ResetGame re-creates the score text and sets its position:

private void ResetGame()
{
    _ballPosition = new Vector2(_initialBallPosition.X, _initialBallPosition.Y);
    _goalkeeperPositionX = (_screenWidth - _goalKeeperWidth) / 2;
    _isBallMoving = false;
    _isBallHit = false;
    _scoreText = string.Format("{0} x {1}", _userScore, _computerScore);
    var scoreSize = _soccerFont.MeasureString(_scoreText);
    _scorePosition = (int)((_screenWidth - scoreSize.X) / 2.0);
    while (TouchPanel.IsGestureAvailable)
        TouchPanel.ReadGesture();
}

The _soccerFont.MeasureString measures the string using the selected font, and you will use that measurement to calculate the score position. The score will be drawn in the Draw method:

protected override void Draw(GameTime gameTime)
{
…
    // Draw the score
    _spriteBatch.DrawString(_soccerFont, _scoreText,
         new Vector2(_scorePosition, _screenHeight * 0.9f), Color.White);
    // End the sprite batch
    _spriteBatch.End();
    base.Draw(gameTime);
}

Turn On the Stadium Lights

As a final touch, the game turns on the stadium lights when the light level in the room is dim. The new Ultrabook and 2 in 1 devices usually have a light sensor that you can employ to determine how much light is in the room and change the way the background is drawn.

For desktop applications, you can use the Windows API Code Pack for Microsoft .NET Framework, a library that gives you access to features of the Windows 7 and later operating systems. However, for this game, let’s take another path: the WinRT Sensor APIs. Although written for Windows 8, these APIs are also available to desktop applications and can be used with no change. Using them, you can port your application to Windows 8 without changing a single line of code.

The Intel® Developer Zone (IDZ) has an article on how to use the WinRT APIs in a desktop application (see the “For More Information” section). Based on that information, you must select the project in the Solution Explorer, right-click it, and then click Unload Project. Then, right-click the project again, and click Edit project. In the first PropertyGroup, add a TargetPlatFormVersion tag:

<PropertyGroup>
  <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
  …
  <FileAlignment>512</FileAlignment>
  <TargetPlatformVersion>8.0</TargetPlatformVersion>
</PropertyGroup>

Right-click the project again, and then click Reload Project. Visual Studio reloads the project. When you add a new reference to the project, you will be able to see the Windows tab in the Reference Manager, as shown in Figure 7.

Figure 7. The Windows* tab in Reference Manager

Add the Windows reference to the project. You will also need to add a reference to System.Runtime.WindowsRuntime.dll. If you can’t find it in the list of assemblies, you can browse to the .NET reference assemblies folder; on my machine, the path is C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETCore\v4.5.

Now, you can write code to detect the light sensor:

LightSensor light = LightSensor.GetDefault();
if (light != null)
{
    // a light sensor is present; wire it up as shown below
}

If there is a light sensor, the GetDefault method returns a non-null object that you can use to detect light variations. Do that by wiring the ReadingChanged event, like this:

LightSensor light = LightSensor.GetDefault();
if (light != null)
{
    light.ReportInterval = 0;
    light.ReadingChanged += (s,e) => _lightsOn = e.Reading.IlluminanceInLux < 10;
}

If the reading is less than 10 lux, the variable _lightsOn is True, and you can use it to draw the background in a different manner. If you look at the Draw method of spriteBatch, you will see that the third parameter is a color. Up to this point, you have only used white. This color is used to tint the bitmap: if you use white, the colors in the bitmap remain unchanged; if you use black, the bitmap will be all black; any other color tints the bitmap. You can use the color to turn on the lights, using a green color when the lights are off and the white color when they are on. In the Draw method, change the drawing of the background:

_spriteBatch.Draw(_backgroundTexture, rectangle, _lightsOn ? Color.White : Color.Green);

Now, when you run the program, you will see a dark green background when the lights are off and a light green background when the lights are on (Figure 8).

Figure 8. The complete game

You now have a complete game. It’s by no means finished: it still needs a lot of polish (animations when there is a goal, ball bounces when the goalkeeper catches the ball or the ball hits the posts), but I leave that as homework for you. The final step is to port the game to Windows 8.

Port the Game to Windows 8

Porting a MonoGame game to other platforms is easy. You just need to create a new project in the solution of type MonoGame Windows Store Project, then delete the Game1.cs file and add the four .xnb files in the Content folder of the Windows Desktop app to the Content folder of the new project. You won’t add new copies of the files but instead add links to the original files. In the Solution Explorer, right-click the Content folder, click Add/Existing Files, select the four .xnb files in the Desktop project, click the down arrow next to the Add button, and select Add as link. Visual Studio adds the four links.

Then, add the Game1.cs file from the old project to the new one. Repeat the procedure you used with the .xnb files: right-click the project, click Add/Existing Files, select the Game1.cs file from the other project folder, click the down arrow next to the Add button, and then click Add as link. The last change to make is in Program.cs, where you must change the namespace for the Game1 class because you are using the Game1 class from the desktop project.

That’s it—you have created a game for Windows 8!

Conclusion

Developing games is a difficult task in its own right. You will have to remember your geometry, trigonometry, and physics classes and apply all those concepts to developing the game (wouldn’t it be nice if teachers used games when they taught these subjects?).

MonoGame makes this task a bit easier. You don’t have to deal with DirectX, you can use C# to develop your games, and you have full access to the hardware. Touch, sound, and sensors are available for your games. In addition, you can develop a game and port it with minor changes to Windows 8, Windows Phone, Mac OS X, iOS, or Android. That’s a real bonus when you want to develop multiplatform games.

For More Information

About the Author

Bruno Sonnino is a Microsoft Most Valuable Professional (MVP) located in Brazil. He is a developer, consultant, and author who has written five Delphi books, published in Portuguese by Pearson Education Brazil, and many articles for Brazilian and American magazines and websites.

Developing Data Transfer Applications for Windows* 8 Using Intel® Common Connectivity Framework

$
0
0

Download PDF

The Intel® Common Connectivity Framework (Intel® CCF) is connectivity software for applications running on mobile devices. Applications using Intel CCF can connect users together whether they are across the world behind different firewalls or in the same room with no connection to the Internet. Intel CCF is available for iOS*, Android*, Windows* Desktop, and Windows Store apps, making applications with Intel CCF form factor- and platform-agnostic. Using Intel CCF, developers can produce apps that talk to phones, tablets, PCs, and other smart devices.

Intel CCF’s communication model is peer to peer. It enables people to connect directly with each other and share information between all their mobile computing devices.

In this article I will review how to develop applications with Intel CCF 3.0 for Windows 8 devices. I owned a project to develop an app for transferring files between Windows 8 and Android devices, for which I personally developed the Windows Store app. Here, I will share my experience of using Intel CCF.

First of all, you need to attach the libraries to the project in Microsoft Visual Studio*. The Intel CCF SDK contains two DLL files, libMetroSTC and WinRTSTC, that every Intel CCF application needs. To use the Intel CCF APIs, add Intel.STC.winmd to the project references. The .winmd file contains the metadata for the Intel CCF SDK for Windows Store apps.

Identity Setup

Before a session can be made discoverable, the Intel CCF user must set the identity, which consists of a user name, device name, and avatar. This is the identity that remote users will see. In the SDK for Windows Store apps, the InviteAssist class allows users to set the Intel CCF identity.

                string displayName = await UserInformation.GetDisplayNameAsync();
                _inviteAssist = InviteAssist.GetInstance();
                _inviteAssist.SetUserName(displayName);
                _inviteAssist.SetStatusText("Status");
                _inviteAssist.SetSessionName("Win 8");
                if (_AvatarStorageFile == null)
                {
                    Windows.Storage.StorageFile imgfile = await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(new Uri("ms-appx:///Assets/Device.png"));
                    _AvatarStorageFile = imgfile;
                }
                await _inviteAssist.SetAvatarAsync(_AvatarStorageFile);

Implement a SetUserProfile() function to get an instance of the InviteAssist class and set the profile by calling the InviteAssist.SetUserName(), InviteAssist.SetSessionName(), and InviteAssist.SetAvatarAsync() APIs as shown above. I set the user name to the name of the Windows account, and this name will be visible on all devices that my Windows 8 device connects to. The status text and session name could be exposed to the user in the UI; in my case these parameters can't be changed by the user and always use the values shown above. The profile is now set and ready to be discovered by remote users.

Discovery

Intel CCF remote user discovery is done by calling the Intel.STC.Api.STCSession.GetNeighborhood() API, which returns an IObservableVector&lt;object&gt; of all remote STCSessions that are discoverable. It's the developer's responsibility to either data-bind the observable collection to the UI or update the UI from code-behind to show the list of users. I used the Grid App (XAML) template, which is a standard template in Visual Studio, for rendering the GUI. All discovered users are displayed in the GridList results.

Create an ObservableCollection of NeighborhoodUsers objects. _userList is a List of STCSession objects that holds the neighborhood users.

private static List<STCSession> _userList;
IObservableVector<object> _hood;
ObservableCollection<NeighborhoodUsers> _neighborhoodList = new ObservableCollection<NeighborhoodUsers>();
ObservableCollection<NeighborhoodUsers> neighborhoodList
{
    get { return _neighborhoodList; }
}

Get the IObservableVector by calling the STCSession.GetNeighborhood() API and set a VectorChanged event handler for it.

async void GetNeighborhoodList()
{
    await Task.Run(() =>
    {
        _hood = STCSession.GetNeighborhood();
        _hood.VectorChanged += _hood_VectorChanged;
        STCSession.LockNeighborhood();
        IEnumerator<object> en = _hood.GetEnumerator();
        while (en.MoveNext())
        {
            STCSession session = en.Current as STCSession;
            if (session != null)
                hood_DiscoveryEvent(session, CollectionChange.ItemInserted);
        }
        STCSession.UnlockNeighborhood();
    });
}

void _hood_VectorChanged(IObservableVector<object> sender, IVectorChangedEventArgs evt)
{
    STCSession session = null;
    lock (sender)
    {
        if (sender.Count > evt.Index)
            session = sender[(int)evt.Index] as STCSession;
    }
    if (session != null)
        hood_DiscoveryEvent(session, evt.CollectionChange);
}

Add the hood_DiscoveryEvent() callback function to capture vector change events. This callback notifies the app when remote sessions become available for connection or are no longer available in the neighborhood. When a new session is available, a CollectionChange.ItemInserted event is received; CollectionChange.ItemRemoved and CollectionChange.ItemChanged events are received when a remote STCSession leaves the neighborhood or any STCSession parameter changes, respectively. Call STCSession.ContainsGadget() to check whether the same app is installed on the remote machine.

private async void hood_DiscoveryEvent(STCSession session, CollectionChange type)
{
    switch (type)
    {
        case CollectionChange.ItemInserted:
            await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
            {
                _userList.Add(session);
                AddPeopleToList(session, "Not Connected");
            });
            break;

        case CollectionChange.ItemRemoved:
            // Handle this case to check if remote users have left the neighborhood.
            if (_neighborhoodList.Count > 0)
            {
                NeighborhoodUsers obj;
                try
                {
                    obj = _neighborhoodList.First(x => x.Name == session.User.Name);
                    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                    {
                        _neighborhoodList.Remove(obj);
                        _userList.RemoveAll(x => x.Id.ToString() == session.Id.ToString());
                    });
                }
                catch
                {
                    obj = null;
                }
            }
            break;

        case CollectionChange.ItemChanged:
            // Handle this case to check if any STCSession data is updated.
            {
                STCSession item;
                try
                {
                    item = _userList.First(x => x.Id == session.Id);
                }
                catch
                {
                    item = null;
                }
                if (item != null)
                    item.Update(session);
                break;
            }

        default:
            break;
    }
}

Invitation

Now that we know how to discover remote users (devices), the next step is sending a connection request. In Intel CCF, this process is called invitation. In Intel CCF 3.0, sending and receiving invitations are handled by the STCInitiator and STCResponder classes. STCInitiator is used to send an invitation to a remote user, and STCResponder is used to respond to an incoming request. When a request is accepted by the remote user, an Intel CCF connection is established. There are no restrictions on using the STCInitiator object to send invitations; a single object can send multiple invitations.

The following function initializes the initiator used to send invitations to remote users.

private void InitializeInitiator(STCApplicationId appId)
{
    initiator = new STCInitiator(appId, true);
    initiator.InviteeResponded += initiator_InviteeResponded;
    initiator.CommunicationStarted += initiator_CommunicationStarted;
    initiator.Start();
}

After all callback handlers are set, call the STCInitiator.Start() API. To send an invitation to a discovered remote user, call the STCInitiator.Invite() API.

initiator.Invite(_userList[itemListView.Items.IndexOf(e.ClickedItem)].Id);

To check the status of a sent invitation, implement the STCInitiator.InviteeResponded callback.

async void initiator_InviteeResponded(STCSession session, InviteResponse response)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        switch (response)
        {
            case InviteResponse.ACCEPTED:
                // You are connected to the user.
                break;
            case InviteResponse.REJECTED:
                // Invite was rejected by the remote user.
                break;
            case InviteResponse.TIMEDOUT:
                // No response. Invite timed out.
                break;
        }
    });
}

Now that the invitation has been sent, the remote user needs to receive it. Implement a function called InitializeResponder() and initialize an STCResponder object by passing an STCApplicationId object. Register the STCResponder.InviteReceived() and STCResponder.CommunicationStarted() handlers. These handlers are called when an invitation is received from a remote user and when a communication channel is successfully established between two Intel CCF users, respectively.

private void InitializeResponder(STCApplicationId appId)
{
    responder = new STCResponder(appId);
    responder.InviteReceived += responder_InviteReceived;
    responder.CommunicationStarted += responder_CommunicationStarted;
    responder.Start();
}

Invitations sent by remote users are received in the STCResponder.InviteReceived() callback. When an invitation is received, it can be accepted or rejected by calling the STCResponder.RespondToInvite() API.

async void responder_InviteReceived(STCSession session, int inviteHandle)
{
    if ((_initiatorDataStream == null) && (_responderDataStream == null))
    {
        try
        {
            if (!checkPopUp)
            {
                _inviteHandle = inviteHandle;
                _session = session;
                Debug.WriteLine("Several windows " + _inviteHandle);
                checkPopUp = true;
                await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
                {
                    InviteeName.Text = session.User.Name + " wants to connect";
                    InviteePopup.IsOpen = true;
                    checkPopUp = true;
                });
            }
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
    }
    else
    {
        responder.RespondToInvite(session, inviteHandle, false);
    }
}

Communication and Data Transfer

After sending and receiving invitations, the initiator_CommunicationStarted() and responder_CommunicationStarted() callbacks provide the stream handle. We will use this NetStream handle to transfer data between the two connected users. To get the data stream handle, implement the initiator_CommunicationStarted() and responder_CommunicationStarted() callbacks and store the NetStream object. Callbacks for the NetStream.StreamClosed and NetStream.StreamSuspended events can also be registered; these events are received when a communication channel is closed or suspended, respectively.

void initiator_CommunicationStarted(CommunicationStartedEventArgs args)
{
    _initiatorDataStream = args.Stream;

    objWrapper.SetStream(_initiatorDataStream);

    _initiatorDataStream.StreamClosed += DataStream_StreamClosed;
    _initiatorDataStream.StreamSuspended += DataStream_StreamSuspended;
    _initiatorDataStream.DataReady += objWrapper.StreamListen;
}

void responder_CommunicationStarted(CommunicationStartedEventArgs args)
{
    _responderDataStream = args.Stream;

    objWrapper.SetStream(_responderDataStream);

    _responderDataStream.StreamClosed += DataStream_StreamClosed;
    _responderDataStream.StreamSuspended += DataStream_StreamSuspended;
    _responderDataStream.DataReady += objWrapper.StreamListen;
}

private async void DataStream_StreamClosed(int streamId, Guid sessionGuid)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        UpdateConnectionStatus(_session.User.Name, "Not Connected");
        if (_inviter)
        {
            _initiatorDataStream.Dispose();
            _initiatorDataStream = null;
            _inviter = false;
        }
        else if (_responder)
        {
            _responderDataStream.Dispose();
            _responderDataStream = null;
            _responder = false;
        }
        if (isTransferFrame)
        {
            if (this.Frame != null && this.Frame.CanGoBack) this.Frame.GoBack();
        }
        ResetUIScreen();
    });
}

Now, let's look at the process of transferring data. First, choose the file to send. For this, I developed my own file explorer, but a simpler way to choose files is to use FileOpenPicker. When a file is chosen, write its data to the NetStream handle received earlier. To write data on the communication channel, NetStream.Write() is used.

async void SendFileData()
{
    uint size = 1024 * 4;
    byte[] buffer = new byte[size];
    int totalbytesread = 0;
    using (Stream sourceStream = await storedfile.OpenStreamForReadAsync())
    {
        do
        {
            int bytesread = await sourceStream.ReadAsync(buffer, 0, buffer.Length);
            _objNetStream.Write(buffer, (uint)bytesread);
            totalbytesread += bytesread;
            TransferedBytes = totalbytesread;

            if (args.nState != TransferState.FT_SEND_PROGRESS)
            {
                args.nState = TransferState.FT_SEND_PROGRESS;
                args.FileName = storedfile.Name;
                args.FileSize = (int)storedfileProperties.Size;
                args.DataBuffer = null;
                args.readbytes = 0;
                OnTransferUpdates(args);
            }
        } while (totalbytesread < sourceStream.Length);
    }
}

To receive a file, call NetStream.Read() in the DataReady event callback (registered above as objWrapper.StreamListen) that was set up in the Communication and Data Transfer section.

readBytes = _objNetStream.Read(receivebuffer, (uint)bytesToRead);

In conclusion, I hope this information helps you to understand the Intel CCF SDK and that you will use this SDK for developing cool applications for Windows 8 and Android.

 

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

Copyright © 2014 Intel Corporation. All rights reserved.

*Other names and brands may be claimed as the property of others.

A Basic Sample of OpenCL™ Host Code

$
0
0

Download PDF [686.3 kB]

Download Sample OCL ZIP [10.89 mB]

Contents

  1. Introduction
  2. About the Sample
  3. OpenCL Implementation
  4. Limitations
  5. OpenCL Application Basics
  6. Project Structure
  7. OpenCL APIs Used
  8. Controlling the Sample
  9. References

Introduction

Programmers new to OpenCL may find that the most complete documentation, the Khronos OpenCL specification, is not the best guide to getting started with OpenCL programming. The specification describes many options and alternatives, which can be confusing at first. Other OpenCL code samples may focus on the device kernel code, or may use host code written with an OpenCL "wrapper" library that hides the details of how to use the standard OpenCL host API directly.

The SampleOCL sample code described in this document aims to provide a clear and readable representation of the basic elements of a non-trivial OpenCL program. The focus of the sample code is the OpenCL™ code for the host (CPU), rather than kernel coding or performance.  It demonstrates the basics of constructing a fairly simple OpenCL application, using the OpenCL v1.2 specification.[1] Similarly, this document focuses on the structure of the host code and the OpenCL APIs used by that code.

About the Sample

This code sample uses the same OpenCL kernel as the ToneMapping sample (see reference below), previously published for the Intel® SDK for OpenCL Applications [2]. This simple kernel attempts to make visible features of an image that would otherwise be too dark or too bright to distinguish. It reads pixels from an input buffer, modifies them, and writes them out to the same position of an output buffer. For more information on how this kernel works, see the document High Dynamic Range Tone Mapping Post Processing Effect [3].

OpenCL Implementation

The SampleOCL sample application is not intended to "wrap" OpenCL; that is, it does not try to replace OpenCL APIs with a "higher level" API. Generally I have found that such wrappers are not much simpler or cleaner than using the OpenCL API directly and, while the original programmer of a wrapper may find the wrapper easier to work with, the wrapper will impose a burden on any OpenCL programmer called upon to maintain the code. The OpenCL APIs are a standard. To wrap them in a proprietary "improved" API is to throw away much of the value of having that standard.

With that said, the SampleOCL implementation does make use of a few C++ classes and associated methods to separate the use of OpenCL APIs into a few groups. The application is broken into two main classes to separate generic application elements from elements related to OpenCL. The former is C_SampleOCLApp; the latter is C_OCL.

Limitations

This sample code focuses only on the basics of an OpenCL application, as specified in version 1.2. It provides no insight into differences from other revisions, though most of the information should still be relevant for newer revisions.

The host side application code of this sample is not intended to demonstrate the most optimal performance. For simplicity, several obvious optimizations have been left out.

OpenCL Application Basics

What follows is a fairly complete explanation of a basic OpenCL application program sequence. The emphasis is on "basic," as many options are not covered. More information can be found in the OpenCL specification [1].

An OpenCL application should be able to execute with substantial parallelism on a variety of processing devices such as multi-core CPUs with SIMD instruction support and Graphics Processing Units (GPUs), either discrete or integrated into a CPU. As such, one of the first things an OpenCL application must do is determine what devices are available and select the device or devices that will be used. A single platform might support more than one type of device, such as a CPU that has an integrated GPU, and more than one platform may be available to the application.

Each platform available to the OpenCL application will have an associated name, vendor, etc. That information can be obtained using the OpenCL APIs clGetPlatformIDs() followed by clGetPlatformInfo() and can be used to select a desired platform.

Once a platform is selected, a context must be created to encompass the OpenCL devices, memory, and other resources needed by an application. With the selected platform ID and a specification of the desired device type (CPU, GPU, etc.), an application can call clCreateContextFromType() and then use clGetContextInfo() to obtain the device IDs. Or, it can directly request device IDs for a given platform ID and device type using clGetDeviceIDs() and then use clCreateContext() with those device IDs to create the context. This sample uses the latter approach to create a context with a single GPU device.

With the desired device ID(s) and context, one can create a command queue for each device to be used using clCreateCommandQueue(). The command queue is used to "enqueue" operations from the host application to the GPU or other device, for example, requesting that a particular OpenCL kernel be executed. This sample code creates a single command queue for a GPU device.
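
To make these first steps concrete, here is a minimal sketch of the platform, device, context, and command-queue setup using the OpenCL 1.2 C API. It is illustrative only (error handling is reduced to simple checks); in the sample itself these steps are wrapped inside the C_OCL class rather than written inline like this.

// Minimal platform/device/context/queue setup sketch (OpenCL 1.2).
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_int err;

    // Pick the first available platform (a real application would compare
    // the platform name from clGetPlatformInfo against a requested name).
    cl_platform_id platform;
    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) return 1;

    char name[256];
    clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
    printf("Using platform: %s\n", name);

    // Request a GPU device ID, then create a context and a command queue on it.
    cl_device_id device;
    err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    if (err != CL_SUCCESS) return 1;

    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    if (err != CL_SUCCESS) return 1;

    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);
    if (err != CL_SUCCESS) return 1;

    /* ... build programs, create kernels, and enqueue work here ... */

    clReleaseCommandQueue(queue);
    clReleaseContext(context);
    return 0;
}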

With that initialization work done, a common next step is to create one or more OpenCL program objects using clCreateProgramWithSource(). Once the program is created, it must still be built (essentially compiled and linked) using clBuildProgram(). That API allows setting options to the compiler, such as #defines to modify the program source.

Finally, with the program created and built, kernel objects that link to the functions in that program can be created, calling clCreateKernel() for each kernel function name.
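
The following sketch shows the program and kernel creation steps just described, assuming the context and device from the previous sketch. The file and kernel names are placeholders rather than the ones SampleOCL uses; the sample performs the equivalent work in ReadSourceFile(), CreateProgramFromFile(), and CreateKernelFromProgram().

// Sketch: read kernel source, create and build a program, create a kernel.
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

cl_kernel CreateKernelFromFile(cl_context context, cl_device_id device,
                               const char* fileName, const char* kernelName)
{
    // Read the whole .cl source file into a null-terminated string.
    FILE* f = fopen(fileName, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    char* source = (char*)malloc(size + 1);
    fread(source, 1, size, f);
    source[size] = '\0';
    fclose(f);

    cl_int err;
    const char* src = source;
    cl_program program = clCreateProgramWithSource(context, 1, &src, NULL, &err);
    free(source);
    if (err != CL_SUCCESS) return NULL;

    // Build (compile and link) for the selected device; compiler options such
    // as "-D" defines could be passed here instead of NULL.
    err = clBuildProgram(program, 1, &device, NULL, NULL, NULL);
    if (err != CL_SUCCESS)
    {
        char log[4096];
        clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        printf("Build failed:\n%s\n", log);
        clReleaseProgram(program);
        return NULL;
    }

    cl_kernel kernel = clCreateKernel(program, kernelName, &err);
    clReleaseProgram(program);   // the kernel keeps its own reference to the program
    return (err == CL_SUCCESS) ? kernel : NULL;
}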

Prior to executing an OpenCL kernel, set up the data to be processed, usually by creating linear memory buffers with the clCreateBuffer() API function. (An image is another OpenCL memory object type, not used in this sample.) The clCreateBuffer function can allocate memory for a buffer of a given size and optionally copy data from host memory, or it can set up the buffer to directly use space already allocated by the host code. (The latter avoids copying from host memory to the OpenCL buffer, which is a common performance optimization.)

Typically, a kernel will need at least one input and one output buffer as well as other arguments. The arguments need to be set up one at a time for the kernel to access at execution time by calling the clSetKernelArg() function for each argument. The function is called with a number indexing a particular argument in the kernel function argument list. The first argument is passed with index 0, the second with index 1, etc.

With the arguments set, call the function clEnqueueNDRangeKernel() with the kernel object and a command queue to request that the kernel be executed. Once the kernel is enqueued, the host code can do other things, or it can wait for the kernel (and everything previously enqueued) to finish by calling the clFinish() function. This sample calls clFinish(), as it includes code that times the total kernel execution (including any enqueue overhead) in a loop and needs to wait for each execution to finish before recording the final time or its contribution to the average time.
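
Putting the buffer, argument, and execution steps together, the sketch below assumes the context, queue, and kernel objects from the previous sketches. The argument layout (input buffer, output buffer, pixel count) is illustrative only, not the real ToneMapping kernel signature, and the result is read back with clEnqueueReadBuffer for brevity; the sample itself maps and unmaps the output buffer instead.

// Sketch: create buffers, set kernel arguments, enqueue, and wait.
#include <CL/cl.h>

int RunKernelOnce(cl_context context, cl_command_queue queue, cl_kernel kernel,
                  const float* hostInput, float* hostOutput, size_t pixelCount)
{
    cl_int err;
    size_t bytes = pixelCount * 4 * sizeof(float);   // RGBA float pixels

    // Allocate the input buffer and copy the host data into it; allocate an
    // output buffer of the same size. CL_MEM_USE_HOST_PTR could be used
    // instead to avoid the copy, as mentioned above.
    cl_mem input = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  bytes, (void*)hostInput, &err);
    cl_mem output = clCreateBuffer(context, CL_MEM_WRITE_ONLY, bytes, NULL, &err);

    // Arguments are set by index: 0 = input, 1 = output, 2 = pixel count.
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &input);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &output);
    cl_uint count = (cl_uint)pixelCount;
    clSetKernelArg(kernel, 2, sizeof(cl_uint), &count);

    // Enqueue one work-item per pixel and wait for everything to finish.
    size_t globalSize = pixelCount;
    err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL,
                                 0, NULL, NULL);
    clFinish(queue);

    // Read the result back to host memory (blocking read).
    clEnqueueReadBuffer(queue, output, CL_TRUE, 0, bytes, hostOutput,
                        0, NULL, NULL);

    clReleaseMemObject(input);
    clReleaseMemObject(output);
    return (err == CL_SUCCESS) ? 0 : -1;
}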

That's the bulk of what goes into an OpenCL application. There are some clean-up operations, such as calling clReleaseKernel, clReleaseMemObject, clReleaseProgram, etc. These are included in the sample, even though OpenCL should automatically release all resources when the program exits. A more complex program might wish to release resources in a timely fashion to avoid memory leaks.

A final word of caution: while this sample does not use "events," they can be very useful for more complex applications that wish, for example, to overlap CPU and GPU processing. However, it is very important to note that any clEnqueueXXXXX() function (where "XXXXX" is replaced with the name of one of many possible functions) that is passed a pointer to an event will allocate an event, and the calling application code is then responsible for calling clReleaseEvent() on that event at some point. If this is not done, the program will experience a memory leak as events accumulate.

A common mistake is to use the clCreateUserEvent() function to allocate an event to pass to any clEnqueueXXXX function, thinking that OpenCL will signal that event when it completes. OpenCL will not use that event, and the clEnqueueXXXX will return a new event, overwriting the contents of the event variable passed by pointer. This is an easy way to create a memory leak. User events have a different purpose, beyond the scope of this sample. For more details on OpenCL events, please see the OpenCL specification.[1]
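
For reference, the following small sketch shows the intended pattern, assuming a valid queue, kernel, and global size exist: pass the address of a NULL cl_event, let the enqueue call allocate the event, and release it when done.

// Sketch: correct event lifetime handling for an enqueue call.
#include <CL/cl.h>

void EnqueueAndReleaseEvent(cl_command_queue queue, cl_kernel kernel, size_t globalSize)
{
    cl_event evt = NULL;   // do NOT pre-create this with clCreateUserEvent
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL, 0, NULL, &evt);
    clWaitForEvents(1, &evt);   // wait here, or use the event in later enqueues
    clReleaseEvent(evt);        // required; otherwise events accumulate and leak
}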

Project Structure

_tmain ( argc, argv ) - Main entry point function in the Main.cpp file.

Creates an instance of class C_SampleOCLApp.

Calls C_SampleOCLApp::Run() to start application.

That's all it does! See the C_MainApp and C_SampleApp classes below for more details.

class C_MainApp - Generic OpenCL application super-class in the C_MainApp.h file.

On construction, creates an instance of the OpenCL class C_OCL.

 

Defines a generic application "run" function:

Run()

Run() is a good starting point for reading the code to understand how an OpenCL application is initialized, run, and cleaned up.

Run() calls virtual functions (see below) in a simple representative application sequence.

 

Declares virtual functions to be defined by C_SampleOCLApp (below):

AppParseArgs(): parses command-line options
AppUsage(): prints usage instructions
AppSetup(): application setup, including OpenCL setup
AppRun(): application-specific operations
AppCleanup(): application cleanup

 

class C_SampleOCLApp - Derived from C_MainApp, defines functions specific to this sample.

Implements application specific code for the C_MainApp virtual functions in the SampleApp.cpp and SampleApp.h files. (See class C_MainApp (above) for the virtual functions implemented.)

Defines "ToneMap" OpenCL kernel setup and execution functions in the ToneMap_OCL.cpp file:

RunOclToneMap(): does one-time setup for ToneMap, then calls ToneMap().
ToneMap(): sets the ToneMapping kernel arguments and executes the kernel.

 

class C_OCL - Most of the host side OpenCL API set up and clean up code.

On construction, initializes OpenCL. On destruction, cleans up after OpenCL.

Defines OpenCL service functions in the C_OCL.cpp and C_OCL.h files:

Start(): sets up the OpenCL device for Intel® Iris™ graphics on the proper platform.
ReadAllPlatforms(): obtains all available OpenCL platforms, saving their names.
MatchPlatformName(): helper function that chooses a platform by name.
GetDeviceType(): helper function that reports whether the device type is GPU or CPU.
CheckExtension(): checks whether a particular OpenCL extension is supported on the current device.
ReadExtensions(): obtains a string listing all OpenCL extensions for the current device.
SetCurrentDeviceType(): sets the desired device type and creates the OpenCL context and command queue.
CreateProgramFromFile(): loads a file containing OpenCL kernels, creates an OpenCL program, and builds it.
ReadSourceFile(): reads an OpenCL kernel source file into a string, ready to build as a program.
CreateKernelFromProgram(): creates an OpenCL kernel from a previously built program.
GetDeviceInfo(): two helper functions that get device-specific information; one allocates memory to receive and return results, the other returns results via a pointer to memory provided by the caller.
ClearAllPlatforms(): releases everything associated with a previously selected platform.
ClearAllPrograms(): releases all currently existing OpenCL programs.
ClearAllKernels(): releases all currently existing OpenCL kernels.

 

OpenCL APIs Used

clBuildProgram
clCreateBuffer
clCreateCommandQueue
clCreateContext
clCreateKernel
clCreateProgramWithSource
clEnqueueMapBuffer
clEnqueueNDRangeKernel
clEnqueueUnmapMemObject
clFinish
clGetDeviceIDs
clGetDeviceInfo
clGetPlatformIDs
clGetPlatformInfo
clReleaseCommandQueue
clReleaseContext
clReleaseDevice
clReleaseKernel
clReleaseMemObject
clReleaseProgram
clSetKernelArg

Controlling the Sample

This sample is run from a Microsoft Windows* command line console. It supports the following command line and optional parameters:

ToneMapping.exe [ ? | --h ] [-c|-g] [-list] [-p "platformName"] [-i "full image filename"]

? OR --h: prints this help message
-c: runs OpenCL on the CPU
-g: runs OpenCL on the GPU (default)
-list: displays a list of platform name strings
-p "platformName": supplies a platform name (in quotes if it has spaces) to check for and use
-i "full image filename": supplies an image file name (in quotes if it has spaces) to process

References

  1. OpenCL Specifications from Khronos.org: http://www.khronos.org/registry/cl/
  2. Intel® SDK for OpenCL™ Applications: http://software.intel.com/en-us/vcsource/tools/opencl-sdk
  3. High Dynamic Range Tone Mapping Post Processing Effect: http://software.intel.com/en-us/vcsource/samples/hdr-tone-mapping

 

Intel, the Intel logo, and Iris are trademarks of Intel Corporation in the U.S. and other countries.
* Other names and brands may be claimed as the property of others.
OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission from Khronos.
Copyright © 2014, Intel Corporation. All rights reserved.

Adobe Photoshop* with Open Standards Enhanced by Intel® HD and Iris™ Graphics

$
0
0

Download PDF [1.13MB]

Introduction

In this article, we’ll explore the strides Adobe engineers have made over the last few years to enhance Photoshop using OpenGL* and OpenCL™ to increase hardware utilization. The Adobe team selected two features—Blur and Smart Sharpen—as the focus of its recent efforts because both provided less than optimal processing speed and quality. We will discuss the results on those features in this paper. 

Open Standards Languages, Adobe Photoshop, and Intel

OpenGL (Open Graphics Library) has been used for years to boost rendering performance and resource utilization across platforms. OpenCL (Open Computing Language) is a royalty-free open standard for portable, general-purpose parallel programming across CPUs, GPUs, and other processors. The OpenCL standard complements the existing OpenGL APIs, adding general computation routines alongside OpenGL's use of graphics processors for rendering work. OpenCL gives developers a uniform programming environment to execute code on all processing resources within a given platform.

Adobe Photoshop is a leading graphics industry application used for graphics editing and manipulation. A heavy processing and memory resource user, Photoshop is a powerful application that requires the greatest performance possible from a computer. To aid in its graphics processing capabilities, Adobe has used open standards for many generations of Photoshop. It has now been updated to take advantage of OpenCL, which allows for an even higher level of performance. 

Intel provided testing for this report. Intel also makes available an array of tools and SDKs to accelerate development for visual computing. These include the Developer's Guides for Intel® Processors (the page links to guides for each generation of Intel graphics processors); the latest is the Graphics Developer's Guide for 4th Generation Intel® Core™ Processor Graphics, which now includes OpenGL. Intel also offers the Intel® SDK for OpenCL Applications and a web site dedicated to visual computing.

For a powerful application like Photoshop, using open standards like OpenGL and OpenCL can improve performance and allow the processing routines to be used across platforms and with other Adobe products more easily.

Photoshop’s Use of OpenGL and OpenCL Standards

A few years ago, in Adobe's Creative Suite* 4 release of Photoshop (Photoshop CS4), Adobe developers focused their OpenGL efforts on enhancing the Canvas and 3D interactions. They implemented Smooth Zoom, Panning, Canvas Rotate, Pixel Grid, and 3D Axis/Lights using the OpenGL API to improve performance. To turn these features on, open Preferences, enable "Use Graphics Processor," click the "Advanced Settings" button, select "Advanced" from the Drawing Mode dropdown menu, and enable the checkboxes for "Use Graphics Processor to Accelerate Computation" and "Use OpenCL." Refer to Figure 1 and Figure 2 for recommended settings in the Photoshop user interface.

Figure 1: Graphics Settings in Photoshop* Preferences Dialog

Figure 2: Select "Advanced" in the Drawing Mode Dropdown

With Photoshop CS5, the developers used OpenGL to speed up the user interface (UI) and to add the Pixel Bender plug-in. The specific UI features targeted were the Scrubby Zoom, HUD Color Picker, Color Sampling Ring, Repousse, and 3D Overlays. With these new features, the OpenGL modes were expanded to encompass basic, normal, and advanced methods.

Then in Photoshop CS6, the team enhanced content editing with standards-based features from both OpenGL and OpenCL. Developers added Adaptive Wide Angle, Liquify, Oil Paint, Puppet Warp, Lighting Effects, and 3D Enhancements using OpenGL. The OpenCL standard was used to add a Field/Iris Tilt/Shift function as well as the Blur function.

Today, with Adobe's latest release, Creative Cloud, the Photoshop application is enhanced even further with Smart Sharpen and the selectable modes of "Use Graphics Processor" and "Use OpenCL."

Intel HD Graphics Devices that support OpenGL and OpenCL

The following Intel graphics devices should be enabled for OpenGL and OpenCL graphics acceleration by default:

  • 4th generation Intel® Core™ processors
    • Intel® HD Graphics 4200, 4400, 4600, P4600, P4700
    • HD Graphics 5000
    • Iris™ Graphics 5100
    • Iris™ Pro Graphics 5200
  • 3rd gen Intel® Core™ processors
    • Intel® HD Graphics 4000, P4000
  • 2nd gen Intel® Core™ processors
    • Intel® HD Graphics 3000, P3000

Developing Specific Photoshop Enhancements

How did the developers achieve these results? Photoshop uses a system of layers to apply many of its advanced features to an image. Figure 3 illustrates this concept by showing three layers of a very simple image as the WHITE space. The EFFECTS, the RED and BLUE layers in the example, are separate layers in the stack. Effects that can be applied in layers include Sharpen, Blur, and even Red-eye Removal. Effects can be applied to the final image without changing the original file. These layers can also be ordered, stacked, and combined to provide a blended effect as the right side of the figure shows, e.g., combining red and blue to get PURPLE. Additionally, there are special layers called “mask layers” that allow you to restrict an effect’s application to a select region of the image.

Figure 3: Separate (Left) and Combined (Right) Layers

The image-combining aspect also applies to Photoshop textures. A Photoshop texture refers to the content of a layer that is then blended or overlaid with other layers to “texture” an image. Notice how the bricks from the image on the lower left below provide a texture to the cloak of the statue in the middle image when applied with a small percentage of opacity.

Figure 4: Example of Using Texture in Photoshop*

Adobe used the OpenGL API to enhance the Photoshop texture/layer effect. In the OpenGL advanced mode, the GL_RGBA16F_ARB internal format enables the shader tool to apply checkerboard compositing, tone mapping, and color matching.
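
For readers unfamiliar with that format, the short OpenGL sketch below shows what allocating a GL_RGBA16F_ARB texture looks like. It is purely illustrative and is not Adobe's code; the function name and parameters are hypothetical.

// Illustrative only: allocate a 16-bit floating-point RGBA texture using the
// GL_RGBA16F_ARB internal format (ARB_texture_float).
#include <GL/glew.h>

GLuint CreateHalfFloatTexture(int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // 16-bit float per channel preserves precision for compositing and tone mapping.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    return tex;
}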

Sharpen the Focus

Sports photographers use the Sharpen step extensively, as just a few increments on the controls can make a world of difference in the impact of an action shot. Figure 5 demonstrates how detail can be improved by applying a Sharpen step in photo editing. Notice that the text, stars, and even brush-stroke detail are a little more pronounced in the "After" image on the right.

Figure 5: Original Image (Left) and After Sharpen (Right)

However, the Sharpen step can create some unwanted side effects. Details that are relatively sharp or insignificant in the original image can develop artifacts akin to boosting the image “noise,” producing a halo effect. For this release, Adobe renovated the legacy Smart Sharpen by introducing a patch-based “denoise and sharpen” algorithm implemented using the OpenCL standard. The new patch-based algorithm produces a sharpened image without any halo effect. Furthermore, the denoise step suppresses the “noise gets boosted when you sharpen” issue. Compare the images in Figure 6, Figure 7, and Figure 8 below. With this result, Adobe looks forward to using these standards to further improve all the sharpen tools.

Figure 6: Original text image

Figure 7: Image after applying legacy smart sharpen w/halo effect

Figure 8: Image after applying patch-based smart sharpen

Bringing Blur into Focus

Another editing tool function that was improved by using OpenCL was the Blur tool. There are numerous ways to emphasize and de-emphasize a portion of an image. Many qualities can be influenced at the time the photograph is captured, but a photograph’s impact can be improved, or at least changed, with some post-processing. Red-eye removal and cropping are very common post-processing tasks, but image sharpness can also be improved. Image area-specific sharpness can have a large impact.

Figure 9: Mona Lisa (Portrait of Lisa Gherardini, wife of Francesco del Giocondo) by Leonardo da Vinci

In his masterpiece Mona Lisa, Leonardo da Vinci (Figure 9) [8] emphasized his subject in the portrait by placing her image in the foreground with a somewhat out-of-focus rural landscape behind her. By blurring the background, he helped the viewer focus on his subject, which was the most important part of the painting, not the background. Following is an example of how blurring can improve a more modern image. Finding the photograph’s main theme can be difficult, so blurring helps refine the image’s theme. I took the sharpened image used in Figure 6 and further emphasized the central star in the image by applying a Blur tool, which results in the clarity of the star in the image on the right below (Figure 10). Blurring changed our perspective of the image so that the star is obviously the focus. Suffice it to say, there are lots of ways to blur an image (on purpose), and this is one of the newest ways.

Figure 10: Original Image (left), Sharpen (center), then Blur (right) added for emphasis

Adding Blur to an image is much like using a color crayon, except the mouse is the crayon and the color is the Blur feature. To apply the Blur, you select the Blur tool, size the tool “brush” (a cursor that can be sized from 1 screen pixel to the size of the entire image) to match the size of the image region you wish to blur, and then click-and-hold the mouse while “coloring or scrubbing” over the area of the image you wish to blur. The more coloring action performed, the more blur applied to the image region.

Intel Increased Graphics Performance

The exercise of adding an OpenCL Blur tool was somewhat challenging and provided a few good learning opportunities. The team wanted to balance the workload by utilizing all the available resources on the host platform, and cross-platform support, including Windows* and Mac* OS, was critical. These factors led them to OpenCL. The team ended up taking an existing blur tool in Photoshop and porting it from optimized CPU code to OpenCL kernels.

Adobe looked to reduce the complexity of Blur, which before OpenCL required multiple command queues running on multiple threads. They also experienced resource limitations, such as timeouts and out-of-memory failures, on lower-end video subsystems. Finally, platform variations, like driver stacks and the use of various compilers, would be reduced by going to the OpenCL-based solution. OpenCL reduced these challenges by making it possible to cache a portion of an image in local memory and break images down into smaller 2k by 2k blocks for the graphics processor. These improvements resulted in higher reliability and a 4 to 8 times faster filter time by utilizing the GPU.

Intel’s testing shows performance gains on the following Photoshop actions, as the available execution units and memory bandwidth have increased over the generations of Intel HD graphics as shown in the chart below (Figure 11).[1]

Figure 11: Photoshop* with OpenGL* Performance over Generations of Intel HD Graphics

When tests are run with the OpenGL or OpenCL features enabled and disabled, the graphs below (Figure 12) show that these routines add a significant performance improvement to both the Liquify filter and the Field Blur tool. Liquify and Blur processing times are normalized (in seconds) to 1 for GPU acceleration off/on on Intel® HD Graphics 4600.[1]

Figure 12: Photoshop* tool performance with Standards On/Off

The effort was well worth it. When tested with OpenCL hardware acceleration on versus off, the new Blur function's processing time was up to 3x faster, depending on the workload and the size of the radius being blurred (Figure 13).[1]

Figure 13: Sample blur execution time (in seconds) compared

General application processing accounts for the majority of time in smaller workloads, so larger workloads show a better improvement in processing time. When OpenCL acceleration is enabled, both the CPU and the GPU are efficiently utilized, with many of the multithreaded app's threads submitting work to the graphics processor. The graphics processing unit is utilized at a minimum 70% rate, while memory utilization is 10%-36% depending on the graphics subsystem. Finally, there were no stalls in the graphics pipeline, making for an improved user experience.

Summary

Adding standards-based processing routines has allowed Adobe to continue its tradition of enhancing Photoshop performance with each release. With the addition of OpenCL-based acceleration on an Intel HD Graphics device, the user experiences an improvement in performance and gains an ability to evaluate the blur filter almost real-time across the entire image. This complete image experience was not possible before OpenCL was added to these filters, and this change makes creating compelling images much more efficient. Prior to the addition of OpenCL, only a small fraction of the image could be previewed before applying the effect. Similarly, users can review their smart sharpening filter as they make adjustments full screen and get to the desired final image faster. Now with OpenCL, Photoshop is clearly better.


[1]Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Any difference in system hardware or software design or configuration may affect actual performance.

 

References and Resources

About the Authors

Tim Duncan is an Intel Engineer and is described by friends as “Mr. Gidget-Gadget.” Currently helping developers integrate technology into solutions, Tim has decades of industry experience, from chip manufacturing to systems integration. Find him on the Intel® Developer Zone as Tim Duncan (Intel)

Murali Madhanagopal is a member of the Intel Visual & Parallel Computing Group, where he is a Lead Graphics Architect. He received his M.S. in Computer Information Systems from Texas A&M University, College Station and has a bachelor’s degree in Computer Engineering from the College of Engineering Guindy, Anna University, India. Madhanagopal is responsible for developing and executing Intel’s workstation processor graphics strategy that enables ISV’s software to run efficiently on current and future processor graphics-based platforms. He is actively engaged in application and system optimization activities with industry-leading CAD, CAE, and Digital Content Creation ISVs and OEMs.

 

Intel, the Intel logo, and Iris are trademarks of Intel Corporation in the U.S. and/or other countries.
OpenCL and the OpenCL logo are trademarks of Apple Inc and are used by permission by Khronos.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Texture Sharing from Intel Media SDK to OpenGL

$
0
0

Code Sample

Executive Summary

On Windows* OS, Direct3D is usually used for video processing. However, many applications still use OpenGL* for its cross-platform capability, in order to maintain the same GUI look and feel from platform to platform. Recent Intel graphics drivers support NV_DX_interop to enable D3D-to-OpenGL surface sharing, which can be used in conjunction with Intel® Media SDK. Intel® Media SDK can be configured to use Direct3D, and with the introduction of NV_DX_interop, Intel Media SDK's frame buffers can be used by OpenGL without expensive texture copying from GPU to CPU and back to GPU. This sample code and white paper demonstrate how to set up Intel® Media SDK to use D3D for encoding and decoding, do the color conversion from the NV12 color space (Media SDK's natural color format) to the RGBA color space (OpenGL's natural color format), and then map the D3D surface to an OpenGL texture. This pipeline completely bypasses copying the textures from GPU to CPU, which used to be one of the biggest bottlenecks when using OpenGL with Intel® Media SDK.

System Requirements

The sample code is written using Visual Studio* 2013 with two purposes: (1) demonstrating Miracast and (2) demonstrating Intel® Media SDK / OpenGL texture sharing, in which the decoded surfaces are shared with OpenGL textures with zero copying, making it very efficient. The MJPEG decoder is hardware accelerated on Haswell and later processors, and Media SDK automatically falls back to a software decoder on earlier processors. In either case, an MJPEG-capable camera (either onboard or a USB webcam) is required. Most of the techniques used in the sample code and white paper should be applicable to Visual Studio 2012, with the exception of identifying the Miracast connection type. The sample code is based on Intel® Media SDK 2014 for Clients and can be downloaded from the following link (https://software.intel.com/sites/default/files/MediaSDK2014Clients.zip). Upon installing the SDK, a set of environment variables is created so Visual Studio can find the correct paths for the header files and libraries.

Application Overview

The application takes the camera as an MJPEG input and goes through a pipeline to decode the MJPEG video, encode the stream to H264, and then decode the H264. The MJPEG camera stream (after decoding) and the final H264 decoded stream are displayed in an MFC-based GUI. On Haswell systems, the two decoders and one encoder (1080p resolution) run sequentially for readability, but they are quite fast due to hardware acceleration, and the camera speed is the only limit on fps. In real-world scenarios, the encoders and decoders should run in separate threads, and performance shouldn't be a problem.

On a single-monitor configuration, the camera feed is displayed as a PIP on top of the H264 decoded video in the OpenGL-based GUI (Figure 1). When Miracast is connected, the software automatically identifies the Miracast-connected monitor and displays a full-screen window filled with the H264 decoded video, while the main GUI displays the raw camera video, so the original and encoded video can be compared side by side. Finally, the View->Monitor Topology menu can not only detect the current topology of the monitors but also change it. Unfortunately, it cannot initiate a Miracast connection; that can be done only through the OS charm menu (slide in from the right -> Devices -> Project), and there is no known API to make a Miracast connection. Interestingly, you can disconnect Miracast by setting the monitor topology to internal only. If multiple monitors are connected by wires, the menu can change the topology at any time.

Figure 1. Single-monitor topology. The MJPEG camera is shown in the lower right corner, and the H264 encoded video fills the GUI. When multi-monitor is enabled, such as with Miracast, the software detects the change, and the MJPEG camera and the H264 encoded video are automatically separated onto the two monitors.

Main Entry Point for the Pipeline Setup

The sample code is MFC based, and the main entry point for setting up the pipeline is CChildView::OnCreate(), which initializes the camera, the MJPEG to H264 transcoder, and the H264 decoder, and then binds the textures from the transcoder and decoder to the OpenGL renderer. The transcoder is just a subclass of the decoder that adds the encoder on top of the base decoder. Finally, OnCreate starts a thread that keeps pumping the camera feed, which is serialized. Upon reading the camera feed in the worker thread, it sends a message to the OnCamRead function, which decodes MJPEG, encodes to H264, decodes H264, and updates the textures in the OpenGL renderer. At the top level, the whole pipeline is very clean and simple to follow.

Initializing Decoder / Transcoder

Both the decoder and the transcoder are initialized to use D3D9Ex. Intel® Media SDK can be configured to use software, D3D9, or D3D11. In this sample, D3D9 is used for the ease of color conversion. Intel® Media SDK's natural color format is NV12, and either IDirect3DDevice9::StretchRect or IDirectXVideoProcessor::VideoProcessBlt can be used to convert the color space to RGBA. For simplicity, this white paper uses StretchRect, but VideoProcessBlt is generally recommended because it has nice additional capabilities for post-processing. Unfortunately, D3D11 doesn't support StretchRect, and color conversion can be convoluted. Also, the decoder and transcoder in this paper use separate D3D devices for various experiments, such as mixing software and hardware, but a D3D device can be shared between the two to conserve memory. Once the pipeline is set up this way, the output of the decoding is of type (mfxFrameSurface1 *). This is simply a wrapper for D3D9, and mfxFrameSurface1->Data.MemId can be cast to (IDirect3DSurface9 *) and subsequently used by StretchRect or VideoProcessBlt in the CDecodeD3d9::ColorConvert function after decoding. Media SDK's output surface is not sharable, but a color conversion is necessary anyway to be usable by OpenGL, and a sharable surface is created to store the result of the color conversion.
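
The following is a minimal sketch of that conversion step, assuming a D3D9Ex device and an already-created sharable RGBA render target. The variable and function names are placeholders; in the sample, this logic lives in CDecodeD3d9::ColorConvert.

// Sketch: NV12 (Media SDK decode output) -> RGBA via StretchRect on the GPU.
#include <d3d9.h>
#include "mfxvideo.h"   // Intel Media SDK headers (mfxFrameSurface1)

HRESULT ColorConvertSketch(IDirect3DDevice9Ex* pDevice,
                           mfxFrameSurface1*   pDecoded,     // Media SDK output (NV12)
                           IDirect3DSurface9*  pSharedRGBA)  // sharable RGBA target
{
    // The Media SDK surface wraps a D3D9 surface; MemId can be cast directly
    // when the D3D9 frame allocator is used.
    IDirect3DSurface9* pSrc = (IDirect3DSurface9*)pDecoded->Data.MemId;

    // Creating the sharable RGBA render target is done once, keeping the share
    // handle for the OpenGL interop step described later, e.g.:
    //   HANDLE shareHandle = NULL;
    //   pDevice->CreateRenderTarget(width, height, D3DFMT_X8R8G8B8,
    //                               D3DMULTISAMPLE_NONE, 0, FALSE,
    //                               &pSharedRGBA, &shareHandle);

    // StretchRect performs the NV12 -> RGB conversion on the GPU;
    // VideoProcessBlt could be used instead for extra post-processing.
    return pDevice->StretchRect(pSrc, NULL, pSharedRGBA, NULL, D3DTEXF_LINEAR);
}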

Initializing Transcoder

The result of the transcoder's decode is fed directly into the encoder; ensure that MFX_MEMTYPE_FROM_DECODE is used when allocating the surfaces.

Binding Textures between D3D and OpenGL

The code to bind the texture is in the CRenderOpenGL::BindTexture function. Ensure that WGLEW_NV_DX_interop is defined, then use wglDXOpenDeviceNV and wglDXSetResourceShareHandleNV, followed by wglDXRegisterObjectNV. This binds the D3D surface to an OpenGL texture. It doesn't automatically update the texture, however; calling wglDXLockObjectsNV / wglDXUnlockObjectsNV updates it (see CRenderOpenGL::UpdateCamTexture and CRenderOpenGL::UpdateDecoderTexture). Once the texture is updated, you can use it like any other texture in OpenGL.
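
Below is a minimal sketch of the binding and per-frame update steps, assuming GLEW (with wglew) is initialized and a sharable D3D9 surface with its share handle is available. The names are placeholders rather than the sample's actual members.

// Sketch: D3D9 surface -> OpenGL texture binding via WGL_NV_DX_interop.
#include <d3d9.h>
#include <GL/glew.h>
#include <GL/wglew.h>

GLuint g_glTexture   = 0;
HANDLE g_hInteropDev = NULL;   // returned by wglDXOpenDeviceNV
HANDLE g_hTexObject  = NULL;   // returned by wglDXRegisterObjectNV

bool BindSharedSurface(IDirect3DDevice9Ex* pD3DDevice,
                       IDirect3DSurface9*  pSharedSurface,
                       HANDLE              sharedHandle)
{
    if (!WGLEW_NV_DX_interop)                 // the extension must be present
        return false;

    g_hInteropDev = wglDXOpenDeviceNV(pD3DDevice);
    if (!g_hInteropDev)
        return false;

    // Associate the D3D share handle with the surface before registering it.
    wglDXSetResourceShareHandleNV(pSharedSurface, sharedHandle);

    glGenTextures(1, &g_glTexture);
    g_hTexObject = wglDXRegisterObjectNV(g_hInteropDev, pSharedSurface,
                                         g_glTexture, GL_TEXTURE_2D,
                                         WGL_ACCESS_READ_ONLY_NV);
    return g_hTexObject != NULL;
}

// Call once per frame, after the decoder has written into the shared surface.
void UpdateAndDraw()
{
    wglDXLockObjectsNV(g_hInteropDev, 1, &g_hTexObject);   // refresh the GL view
    glBindTexture(GL_TEXTURE_2D, g_glTexture);
    // ... draw the textured quad here ...
    glBindTexture(GL_TEXTURE_2D, 0);
    wglDXUnlockObjectsNV(g_hInteropDev, 1, &g_hTexObject);
}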

Things to Consider for Multi-Monitor Topology Change

In theory, it may seem simple enough to put up another window on an external monitor and control it based on topology-change detection. In reality, it can take a while from when the OS initiates the switch until the monitor configuration is completed and the content is shown. Combined with the encoder, decoder, D3D, OpenGL, and everything that comes with them, this can be quite complicated to debug. The sample tries to reuse most of the pipeline during the switch, but it may actually be easier to close down the whole pipeline and reinitialize it, because a lot can go wrong when adding a monitor can take more than 10 seconds, even for an HDMI or VGA connection.

Future Work

The sample code for this white paper is written for D3D9 and doesn't include a D3D11 implementation. It's not clear what the most efficient way is to convert the color space from NV12 to RGBA in the absence of StretchRect or VideoProcessBlt. The paper and sample code will be updated when the D3D11 implementation is ironed out.

Contributions

Thanks to Petter Larsson, Michel Jeronimo, Thomas Eaton, and Piotr Bialecki for their contributions to this paper.

 

Intel, the Intel logo and Xeon are trademarks of Intel Corporation in the U.S. and other countries.
*Other names and brands may be claimed as the property of others
Copyright© 2013 Intel Corporation. All rights reserved.


WaveSpy Pro 2014 by Wave Corporate Now Optimized for Intel® Atom™ Tablets for Windows* 8.1 Platform

$
0
0

Providing capabilities to monitor activity on personal or business computers from a tablet device.

(PRWEB) May XX, 2014 – Tablet users can now conveniently monitor activity on personal or business computers with WaveSpy Pro 2014, by security developer Wave Corporate. Now optimized for the Intel® Atom™ tablets for Windows* 8.1 platform, the app allows the user to monitor key strokes, Web activity, video downloads and more.

Applicable to both business and personal computing environments, WaveSpy Pro 2014 provides the user with detailed information including Internet browsing, social networking, Skype*, videos, pictures, Web camera, email contents and more. It even features the ability to monitor keystrokes to see exactly what a user types on a computer.

An expert security tool, WaveSpy Pro 2014 also uses the camera on the computer or tablet in question to track user behavior and patterns. The app then compiles this data into detailed reports that provide insights such as the amount of time a user spent browsing the Internet and the timeframe in which this occurred. In a business environment this provides an additional level of asset usage tracking. In a home or personal environment parents can monitor their children’s activities and block out inappropriate emails or advertising.

The developers at Wave Corporate were offered support and access to Intel development tools and information through their relationship with the Intel® Developer Zone. The company has now fully optimized WaveSpy Pro 2014 for the powerful capabilities of Intel Atom tablets for Windows* 8.1.

“WaveSpy Pro 2014 helps parents and business owners supervise what is being accessed on their personal or business computers and mobile devices,” says Vagner Costa of Wave Corporate. “Thanks to the portability of the Intel Atom tablets for Windows* 8.1, users can now conveniently monitor multiple devices, from anywhere.”

WaveSpy Pro 2014 is now available to download at: http://www.wavecorporate.com.br/programa-espiao-wavespy-pro

About Wave Corporate

Founded in 1998, Wave Corporate is a pioneer in the development of software in the security segment and monitoring information. For more information visit: http://www.wavecorporate.com.br

About the Intel® Developer Zone

The Intel Developer Zone supports independent developers and software companies of all sizes and skill levels with technical communities, go-to-market resources and business opportunities. To learn more about becoming an Intel Software Partner, join the Intel Developer Zone. To learn more, visit: https://software.intel.com/

Optimizing Battery Life on SOC Devices

$
0
0

When: June 11th, 2014 
Where: Taipei at the Ben Asia Conference Centre

Register Now

Session / Duration
Introduction to Power Analysis and Impact of Software / 30 minutes
Power Deep Dive / 45 minutes
Networking Break / 15 minutes
Tools and Step-by-Step Idle Power Analysis Methodology / 90 minutes
Lunch / 60 minutes
Fine Grain Power Optimization / 120 minutes
Break / 15 minutes
Hands On Lab / 60 minutes

Intel® is offering a free full-day technical deep-dive session focusing on optimizing power for Windows*-based Intel mobile platforms, led by Intel power engineers. The course is designed for software, system, and validation engineers.

Introduction to Power Analysis and Impact of Software

Get an introduction to the platform power profile, showing the component-level distribution of total system power, and to the basics of power optimization, including background on SoC/processor/device states, wakeup analysis, and more. Results are presented from characterization studies done on Intel® Atom™- and Intel® Core™-based platforms to evaluate software power impact on Windows*, Android*, and Chrome platforms.

Power Deep Dive

Learn about Windows and Android* power features, including idle resiliency, Connected Standby, and power-friendly applications. The session will include an evaluation of API efficiencies on both the Windows and Android platforms (wakelocks, etc.).

Tools and Step-by-Step Idle Power Analysis Methodology

Discover how to reduce system power consumption under idle and active conditions and conduct fine-grain power tuning for applications down to the mW level. See how to analyze application traces and identify opportunities for fine-grain power optimization on Intel® platforms. Several case studies on media applications, video conferencing, Modern UI, casual games, Intel® Perceptual Computing, and browsing apps will show fine-grain power optimization techniques on Android* and Windows using the power analysis and profiling tools.

Fine Grain Power Optimization

Discover how to reduce system power consumption for semi-active and active workloads and conduct fine-grain power tuning for applications down to the mW level. See how to analyze application traces and identify opportunities for fine-grain power optimization on Intel® platforms. Case studies on media applications, video conferencing, Modern UI, casual games, and browsing apps will show fine-grain power optimization techniques.

Hands On Lab

Bring your own device with ~1 GB of empty space (Intel® Core™ based, with a 3rd or 4th generation processor) and learn how to debug your platform for power using Intel tools. The session will show a quick way of validating your device for battery life.

*Other names and brands may be claimed as the property of others.

 

Fast ISPC Texture Compressor - Update


This article and the attached sample code project were written by Marc Fauconneau Dufresne at Intel Corp.

This sample demonstrates a state of the art BC7 (DX11) Texture compressor. BC7 partitioning decisions are narrowed down in multiple stages. Final candidates are optimized using iterative endpoint refinement. All BC7 modes are supported. SIMD instruction sets are exploited using the Intel SPMD Compiler. Various quality/performance trade-offs are offered.

 


License manager v2.3 required for Composer XE 2013 SP1 Update 3 floating licenses


When installing a floating license of the

     Intel® C++ Composer XE 2013 SP1 Update 3
     Intel® (Visual) Fortran Composer XE 2013 SP1 Update 3

you may see the error

     'No Valid license was found'.

This happens when you are running older Intel® Software License Manager versions (<v2.3).

 

Solution: Install the Intel Software License Manager v2.3 or higher. The newest license manager is available from here.
Quick Workaround (if you are not able to update the license manager quickly): Copy the license file used by the license manager to the local license directory of the client machines.

The default license directories are:

    Windows*:
          %commonprogramfiles(x86)%\Intel\Licenses\ (on 64-bit Windows OS)
          %commonprogramfiles%\Intel\Licenses\ (on 32-bit Windows OS)
    Linux*:
         /opt/intel/licenses
    OS X*:
         /Users/Shared/Library/Application Support/Intel/Licenses/
 

How to identify the license manager version?
Run the command lmgrd -v from the license manager directory, for example,

    Windows 64-bit:
        "%commonprogramfiles(x86)%\Intel\LicenseServer\"lmgrd -v
    Windows 32-bit:
        "%commonprogramfiles%\Intel\LicenseServer\"lmgrd -v
    Linux / OS X:
        /<installpath>/flexlm/lmgrd -v
        where <installpath> is the path where you installed the license Manager.

 

IGZIP: A high-performance Deflate Compressor, with Optimizations for Genomic Data


IGZIP is a fast compression library supporting the DEFLATE and GZIP formats. It is much faster than the standard “gzip -1”, although its compression ratio is slightly worse. It can be built in two different variants that trade-off compression ratio for speed. It can be built under Linux* or Windows*.

This version includes optional optimizations that improve the compression ratio of genomic SAM and BAM files to near “gzip -1” levels, while still maintaining a significant speed advantage.

 

*Other names and brands may be claimed as the property of others.

One Touch Composer: Creating a Digital Sheet Music App with Touch, Stylus, and Keyboard Control on Microsoft Windows* 8 Tablets


By Dominic Milano

Download PDF

Introduction

When Dmitriy Golonanov, the CEO of the international company Maestro Music Software, learned that many musicians would rather use a lightweight tablet than carry a laptop to gigs or a classroom, he decided to enlist a small team of developers, which included Sergey Samokhin, to create music notation software that would run on Microsoft Windows* 8 tablets or Ultrabook™ 2 in 1s running Windows 8.1. The result of this inspiration is One Touch Composer for Microsoft Word*, which gives composers, music educators, and musicians of all levels the ability to create and share great-looking sheet music (Figure 1). Originally submitted under the name One Touch Notation, One Touch Composer was the winning entry in the Tablets category of the Intel App Innovation Contest 2013 (AIC 2013). During the contest, Dmitriy used resources from the Intel® Developer Zone.
OneTouch Figure 1
Figure 1: One Touch Composer for Microsoft Word* leverages the Lenovo ThinkPad* Tablet 2’s multi-touch
screen and supports input from touch, stylus, and keyboards.

To accomplish their goal, Golonanov’s team faced a number of challenges, including having to design a user experience that was both intuitive and practical for professional and amateur musicians, music teachers, and students. Music notation, after all, consists of a complex collection of specialized 2D symbols, and even a relatively simple song can span many pages. Displaying and interacting with that complexity on a small tablet screen challenged the team on a variety of levels, from programming a UI that could handle input from touch, stylus, and two kinds of virtual keyboards to making efficient use of computing resources.

Approach

To meet the six-week development cycle that the AIC 2013 rules specified, Golonanov and his team knew their app wouldn’t be able to meet the needs of all users. So rather than create a program that combined music notation with extensive audio playback capabilities via MIDI* and sound fonts—an approach that’s common in desktop PC music-notation programs—they restricted audio playback to built-in sounds and focused on giving users the ability to easily create and share digital sheet music.

In doing so, Golonanov and his team encountered their biggest challenge—changing their design philosophy from giving users as many powerful tools as they could pack into a desktop app to providing a simplified, intuitive touch-driven experience for a mobile device.

Golonanov and Samokhin worked with Alexey Bagenov, a music notation expert and conductor of the National Wind Orchestra of the Ukraine, who literally drew the UI on a piece of graph paper during a Microsoft BizSpark* UX Tour workshop (Figure 2).
OneTouch Figure 2
Figure 2: Alexey Bagenov’s sketch of One Touch Composer's UI positioned key functions around the perimeter of a touch screen.

Bagenov’s solution provided access to all the app’s functions from a single screen, without traditional menus. Instead, contextual submenus provide access to features such as selecting clefs or entering text.
OneTouch Figure 3
Figure 3: Most common UI functions are positioned along the sides and top of the screen for easy access.

The UI positions the most commonly accessed functions on the left and right sides as well as along the top of the screen (Figure 3). Notes can be input from a virtual touch-screen music keyboard positioned across the bottom, and if the device has a physical keyboard, it can be used to name files and enter text (Figure 4). If no keyboard is present (Figure 5), the user can select a virtual touch-screen keyboard. If a MIDI keyboard is attached to the device over USB, the program uses a timed function to determine whether a keyboard is present and automatically connects to it.
OneTouch Figure 4
Figure 4: The combination of a physical keyboard, stylus, and touch input help give users the ability to interact intuitively with the software.
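The article does not show the timed keyboard check itself. As an illustration only (One Touch Composer is written in Delphi, and the class name and polling interval below are assumptions), a C# version of such a check could poll the Windows multimedia API for MIDI input devices:

```csharp
using System.Runtime.InteropServices;
using System.Timers;

// Illustrative timed check for an attached USB MIDI keyboard.
class MidiKeyboardWatcher
{
    [DllImport("winmm.dll")]
    private static extern int midiInGetNumDevs();   // number of MIDI input devices currently present

    private readonly Timer timer = new Timer(2000); // poll every 2 seconds (interval is an assumption)
    private bool connected;

    public MidiKeyboardWatcher()
    {
        timer.Elapsed += (sender, args) => CheckForKeyboard();
        timer.Start();
    }

    private void CheckForKeyboard()
    {
        bool present = midiInGetNumDevs() > 0;
        if (present && !connected)
        {
            connected = true;
            // Open the MIDI input here and start routing note events to the score editor.
        }
        else if (!present)
        {
            connected = false;
        }
    }
}
```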

The Windows 8 Touch API provided all the necessary functions for implementing multiple input technologies. During early usability tests, the team learned that Microsoft’s recommended minimum of 23x23 pixels for touchable areas was too small for the buttons they wanted to use. Instead, they set 40x40 as the default size for buttons and gave users the ability to change the defaults and customize the interface. Stylus input, the team learned, proved very helpful for handling small UI elements such as staff lines, notes, and dynamics symbols. Finger-based touch interaction is best for simple commands such as scaling, zooming, and scrolling. To avoid unresponsive behavior caused by gaps between touch-targets, they avoided gaps larger than 5 pixels. The team was also careful to make UI elements positioned around the edge of the touch screen large enough to tap easily with a fingertip.

OneTouch Figure 5
Figure 5: If no keyboard is detected, a virtual keyboard can be used to input text (left). The multi-touch screen allows users to interact with the software using touch and a stylus at the same time (right).

Migrating Legacy Code

Some aspects of One Touch Composer were written in Delphi*. Other aspects, such as performance-critical components, were written using Assembler—the inline assembler for writing x86 machine code directly within Delphi programs. Assembler supports Intel® MMX™ technology and Intel® Streaming SIMD Extensions. Golonanov’s team borrowed heavily from existing resources, which played a large role in being able to create the app in the allotted six weeks. For example, One Touch Composer makes use of Maestro Music Software’s extensive, proprietary vector graphics music notation library, which the app accesses using the Delphi GDI+ API. This high-level, easy-to-use API is modeled after the .NET System.Drawing namespace and provides the ability to both display and print 2D graphics and formatted text.

To draw musical symbols on-screen, Golonanov and Samokhin used the DrawString method; the gesture-handling code shown here routes touch input to those drawing routines:

procedure TMSWindow.WMGesture(var Msg: TMessage); var
  gi: TGestureInfo;
  bResult,bHandled: BOOL;
  dwError: DWORD;
  L: Integer;
  ptZoomCenterX, ptZoomCenterY, K, angle: Single;
  aptSecond: TPoint;
  r: TRect;
begin
  bHandled := False;
  if bTouchGestureAPIPresent then
  begin
... test coord of symbols

When a user touches a UI button, which in this example is a Bass Clef button, the following code is used:

procedure TFProgramModule.OnButtonClick(Sender: TObject); begin
  if FMode = mdSetting then
  begin
    if Sender is TControl then
      SetActiveButton(TECItem(TControl(Sender).VCLComObject));
  end else if TECItem(TControl(Sender).VCLComObject) is TECButton then
    with TECButton(TControl(Sender).VCLComObject) do
      case Action of
        CLEF_ACTION: AClefExecute(Self); ...

The app also drew on two existing Maestro Music Software products, Maestro Book* and Maestro Book Online*, to provide the ability to publish, print, and share ebook versions of the music created in One Touch Composer.

Efficiency and Multi-Threading

GDI+ is by no means the fastest way to display 2D graphics on a screen, but it helped ensure that One Touch Composer would run on what Golonanov calls “classic computers” (older x86 systems).

The developers also took care to ensure that One Touch Composer would run on a minimum of 1 GB RAM. The AIC development system, a Lenovo ThinkPad* Tablet 2 with an Intel® Atom™ processor inside, featured 2 GB RAM, which was more than enough to give users the ability to handle musical scores longer than 20 pages.

One Touch Composer’s code was parallelized, and multi-threading was used to enable, for example, simultaneous playback and printing of digital sheet music while the user reads RSS news feeds.

Lessons Learned

By far the biggest challenge in creating One Touch Composer for Word on Windows 8 tablets was simplifying a very complicated application. Golonanov stressed the importance of considering the needs of the audience—in his case, musicians, composers, and educators—in determining the core features of a mobile app.

Golonanov also advises developers to embrace all available input technologies—in this case, touch, stylus, and keyboards—utilizing each for their best-suited functions. He also suggests that developers avoid locking their users into defaults that may not suit the size of their fingers or be the preferred way of interacting with a device or software.

Golonanov’s team continually analyzes the needs of musicians, eliciting and listening to their feedback to improve their applications. Golonanov also continues to participate in the Habrahabra developer community to get ideas from other professionals and to keep pace with platform developments.

Community Help

According to Golonanov, the Habrahabra developer community, one of four international developer communities that participated in AIC 2013, helped his team refine One Touch Composer by providing guidance in the form of written articles about developing software for musicians. The Intel Developer Zone forums were another invaluable resource in providing expert insights.

About the Developer

Dmitriy Golonanov’s interest in music led him to develop music notation software. In 2003, he started at MagicScore Music and in 2013 became CEO of Maestro Music Software.

Helpful Resources

Intel Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel® RealSense™ technology and Windows* to download tools; access dev kits; share ideas with like-minded developers; and participate in hackathons, contests, roadshows, and local events.

Related Articles

Intel, the Intel logo, Intel Atom, Intel Core, Intel RealSense, and MMX are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Enter the Wormhole


Download as PDF

Insights from Developing Wormhole Pinball for the All-in-One on Windows*

By Geoff Arnold

Multi-touch All-in-One (AIO) platforms extend the bounds of what’s possible for those writing touch screen apps. Because of the limited screen size of most touch-screen phones and tablets, it’s generally practical for only one person to be tapping or swiping on the device at a time. However, with the larger multi-touch AIOs, whose screens often stretch more than two feet across, it’s possible for several people to gather around and tap on the screen simultaneously, particularly when the AIO is laid flat on a table.

Of course, touch screen apps built for bigger screens and multiple users have their own programming challenges. After participating in an earlier Intel contest, developer Dave Gannon took on these challenges, coming back with a vengeance with Wormhole Pinball, the winning app in the games category in the Intel App Innovation Contest 2013 in partnership with the Intel® Developer Zone.

Gannon has loved computers ever since his parents brought home a Commodore* 64 when he was a kid. He got his programming start typing in the BASIC commands he found in the C-64 manual and then seeing what happened on the start-up screen.

Today, when he's not in the office, Gannon does some fairly serious dabbling in the world of computer games. What started as a hobby making short demoscene clips has morphed into writing full-fledged games. Describing his intent for the game on CodeProject last summer, Gannon wrote “[w]hat I want to do is use the AIO to expand on what it means to have a pinball game.” The CodeProject community judged Gannon’s pinball app submission to be the grand prize winner in the games category, a distinction that netted him USD 10,000.  

The Appeal of AIO Platforms

Gannon developed Wormhole Pinball for the Lenovo IdeaCentre* Horizon 27, a tabletop PC built for casual gaming. Gannon, who has fond memories of huddling around coffee-table arcade machines with his friends, always wanted one in his house. "When I saw the tabletop All-in-One, I had to do a game for that,” he said.

Figure 1: Wormhole Pinball screenshot. Gannon wanted his entry to recreate the experience of coffee-table-type arcade games.

Gannon wanted to make the most of the Horizon's “tableness” in a game. Pinball was the first idea that sprang to mind. One consideration was how to render the graphics. 3D balls and flippers conceivably could look great on a larger screen, though Gannon was cautious after his experience in last year's contest. He had spent the lion's share of his time trying to incorporate 3D in a feature that would follow the gamer's face with a webcam, leaving hardly any time for the rest of the game development.

This time around Gannon stuck with 2D graphics and spent the time he saved on interesting features to make the game fun for players. The most obvious example is the wormhole, which players can draw on the screen and use to teleport the balls, a nifty feature obviously not possible in the old-fashioned arcade version of the game. "I needed something to make it a little bit crazy and a little bit unpredictable," said Gannon. Other features include adaptive multi-dimensional gravity that changes as the number of players grows. Gannon's app supports up to four players, which means it can make sense of simultaneous gestures, among the most distinctive features of the AIO platform.      

Building the Game Elements

Gannon used the open source Farseer* Physics Engine to create the core elements of his game, including the flippers, a pair of which appears in each corner of the screen. These were a logical starting point for Gannon, a self-declared hobbyist. “I just sort of do it,” he said, when asked to summarize his strategy for planning and building his app. “You'd start with the main thing that you would need, which is a ball and some flippers (Figure 2), and go from there.” 

Figure 2: Gannon constructed the complex shape of his flippers using the simple primitives available in the Farseer* Physics API.

To build the shapes that combined to make the flipper, the hard-coded trapezium was first plotted on graph paper as Gannon sought to understand the nuances of Farseer. “Any polygon needs to be triangulated before it can be used in Farseer,” he said. “You must also make sure your polygon is convex as triangulation of concave polygons using the built-in functions is a bit ropey.”

Farseer, primarily a collision-detection system, creates its own mass for a given object based on the density and area of the shapes that were used to construct it, a default setting Gannon found troublesome. “Most of the time you probably don’t want this, so remember to set the mass explicitly at the end or it will be overwritten,” he said. Figure 3 is the code Gannon used to accomplish this.

     private void CreateFlipper()
        {
            var fixtures = this.Body.FixtureList;
            fixtures.ForEach(i => this.Body.DestroyFixture(i));
            fixtures = null;

            const float largeCircleRadius = 42.8f;
            const float smallCircleRadius = 21.4f;
            var largeCircle = new CircleShape(FarseerUnitConverter.ToPhysicsUnits(largeCircleRadius) * this.Scale, 1.0f);
            var smallCircle = new CircleShape(FarseerUnitConverter.ToPhysicsUnits(smallCircleRadius) * this.Scale, 1.0f);

            smallCircle.Position = FarseerUnitConverter.ToPhysicsUnits(new Vector2(235.4f * this.Scale, 0.0f));
            List<Vector2> points = new Vector2[] {
                FarseerUnitConverter.ToPhysicsUnits(new Vector2(0.0f, -largeCircleRadius * this.Scale)),
                FarseerUnitConverter.ToPhysicsUnits(new Vector2(235.4f * this.Scale, -21.8f * this.Scale)),
                FarseerUnitConverter.ToPhysicsUnits(new Vector2(235.4f * this.Scale, 21.8f * this.Scale)),
                FarseerUnitConverter.ToPhysicsUnits(new Vector2(0.0f, largeCircleRadius * this.Scale)) }.ToList();
            var trapezium = new Vertices(points);

            var triangulatedTrapezium = FarseerPhysics.Common.Decomposition.Triangulate.ConvexPartition(trapezium, TriangulationAlgorithm.Bayazit);
            var trapeziumShape = new PolygonShape(triangulatedTrapezium[0], 1.0f);
            this.Body.CreateFixture(largeCircle);
            this.Body.CreateFixture(smallCircle);
            this.Body.CreateFixture(trapeziumShape);
            this.Body.BodyType = BodyType.Dynamic;
            this.Body.Mass = this.mass;
            this.Body.AngularDamping = 0.1f;
        }

Figure 3: The code that creates the game’s flippers out of simple shapes—circles and trapeziums.

 

The geometry created in Farseer is not actually rendered. Instead, sprites are rendered with the same shape. “This makes the code needed to draw the graphics much simpler as no geometry has to be rendered, yet you can achieve really nice-looking graphics,” said Gannon. Figure 4 is a sprite for the flipper that’s drawn at the same position and angle given by Farseer geometry in Gannon’s code.

Figure 4: A sprite for the flipper drawn at the same position and angle as the Farseer* geometry shown in the code sample given in Figure 3.

Touch Recognition Challenges and Solutions

Unsurprisingly, one of Gannon’s biggest challenges was how to make touch recognition work in a typical game, where up to four people might be huddled around the screen, tapping and swiping on it at the same time. Touch points within a certain radius are deemed part of the same touch event. Easy enough. But what about determining which way—clockwise or counterclockwise—a player drew a circle to create a wormhole? This is a key feature of the game, since the wormholes (Figure 5) behave differently depending on which way they're drawn.

Figure 5: The particle trail that displays when a player draws a circle with his or her finger to create a wormhole.
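As a minimal sketch of that kind of radius-based grouping (the class name, radius value, and method below are illustrative assumptions, not Gannon's actual code):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Assigns an incoming touch point to an existing touch event if it falls within a
// fixed radius of that event's most recent point; otherwise it starts a new event.
public static class TouchGrouping
{
    private const float GroupingRadius = 80f;   // pixels; the value is an assumption

    public static List<Point> FindOrCreateEvent(List<List<Point>> events, Point touch)
    {
        foreach (var ev in events)
        {
            Point last = ev[ev.Count - 1];
            float dx = touch.X - last.X;
            float dy = touch.Y - last.Y;
            if (Math.Sqrt(dx * dx + dy * dy) <= GroupingRadius)
            {
                ev.Add(touch);      // same touch event: append the point
                return ev;
            }
        }
        var newEvent = new List<Point> { touch };
        events.Add(newEvent);       // no nearby event: start a new one
        return newEvent;
    }
}
```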

Solving this challenge meant first iterating over a list of touch points that are determined to be part of the same touch event. Next, Gannon's code computes the signed area of the triangle formed by the origin and each pair of adjacent points in the list. The handedness scores are then summed; if the sum is positive, the touch event is deemed to be clockwise, and if it's negative, the touch event is counterclockwise. Figure 6 is the code, along with a signed-area function that Gannon found after searching the Stack Exchange Q&A website.

        private void UpdateOrientation()
        {
            int pointCount = this.pointHistory.Count;
            if (pointCount < 3)
                return;

            this.handednessScores.Clear();

            for (int i = 0; i < pointCount - 1; i++)
            {
                // Cross product of consecutive points, x1*y2 - y1*x2
                // (twice the signed area of the triangle they form with the origin)
                Point p1 = this.pointHistory[i];
                Point p2 = this.pointHistory[i + 1];
                int score = (p1.X * p2.Y) - (p1.Y * p2.X);
                this.handednessScores.Add(score);
            }

            int sumScores = this.handednessScores.Sum();
            this.PortalOrientation = sumScores > 0 ? PortalOrientation.Clockwise : PortalOrientation.Anticlockwise;
        }

Figure 6: The code for determining whether the circles creating the wormholes are drawn clockwise or counterclockwise. (In the game, wormholes behave differently depending on which way they are drawn.)
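Put another way, the handedness sum is twice the signed area of the traced polygon, given by the shoelace formula over consecutive touch points $(x_i, y_i)$:

$$2A = \sum_{i} \left( x_i\, y_{i+1} - x_{i+1}\, y_i \right)$$

In conventional mathematical coordinates a positive sum indicates a counterclockwise loop; because screen coordinates place the y-axis pointing down, the sign convention flips, which is why the code treats a positive sum as clockwise.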

Another challenge had to do with how the flippers work. As any pinball wizard knows, flippers need to operate over a fairly limited range of motion. Gannon struggled a bit with joint limits defined in Farseer. Occasionally, he said, earlier iterations of his flippers would "bust through the limits and end up flopping about on the wrong side." Gannon added the code in Figure 7 as a hack that he's fairly proud of. "It ended up working rather well," he said. "It even adds some bounce to the flippers by reversing the angular velocity when the limit is reached and multiplying by the variable bounce, which is very close to but not quite 1.0, to prevent the flippers from bouncing forever."

        private void BounceOffLimits()
        {
            float angularVelocity = this.Body.AngularVelocity;
            float bounce = -2.9f;
            if (this.Body.Rotation <= this.Joint.LowerLimit)
            {
                this.Body.Rotation = this.Joint.LowerLimit + 0.0001f;
                this.Body.AngularVelocity = angularVelocity * bounce;
            }
            if (this.Body.Rotation >= this.Joint.UpperLimit)
            {
                this.Body.Rotation = this.Joint.UpperLimit - 0.0001f;
                this.Body.AngularVelocity = angularVelocity * bounce;
            }
        }

Figure 7: The code controlling the range of motion and bounce of the flippers.

Gannon used the Windows* 8 API to harvest touch data, storing information in a buffer about 10 touches deep. Other challenges included deciding what language and framework to use. Gannon wrote the app in C# using Microsoft's XNA* Framework. He followed the standard component model, with top-level components and services and associated subcomponents (for individual parts of the game, such as the ball or flippers) further down the hierarchy.
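A minimal sketch of a fixed-depth touch buffer like the one described, using XNA's Point type (the class and member names are illustrative, not taken from Gannon's source):

```csharp
using System.Collections.Generic;
using Microsoft.Xna.Framework;

// Keeps roughly the last 10 touch points reported for a touch event.
public class TouchBuffer
{
    private const int MaxDepth = 10;
    private readonly Queue<Point> points = new Queue<Point>();

    public void Add(Point p)
    {
        points.Enqueue(p);
        while (points.Count > MaxDepth)
            points.Dequeue();                 // drop the oldest entry
    }

    public IEnumerable<Point> Points
    {
        get { return points; }
    }
}
```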

Gannon said that one drawback to building the app in XNA is that Microsoft counts these as desktop apps and therefore doesn't publish them in the Windows Store, where most Windows 8 apps are sold. One of the first tasks undertaken by Null Ref, Gannon’s newly formed company with fellow AIC 2013 contestant Adam Hill, might well be to port Wormhole Pinball to MonoGame for wider coverage, but only after Gannon and Hill work to polish up Hill's Hot Shots as their first commercial offering.

Advice for Other Developers

Gannon said most of his advice to developers boils down to appropriately scoping the project and not being afraid to ditch features that don't fit. It’s also important to remember how AIO devices might be used when they are laid flat on a table. There's no up or down for someone playing Wormhole Pinball, and the experience needs to be more or less the same no matter where a player is positioned around the screen. Obviously notions of up and down are central to the user experience of most apps built for smaller touch-screen devices. 

Figure 8: Graphic indicating winner of a Wormhole Pinball game.

Perhaps most importantly, Gannon said that developers need to be aware of some of the cyclical trends apparent in the gaming industry, especially the resurgence of simple games created by small teams of developers for touch screens that end up reaching huge audiences. (Candy Crush Saga*, anyone?) Gannon notes that the early 1980s, when he was pecking out BASIC commands on his Commodore 64, also happened to be the heyday of the 8-bit era of games.

“In some ways it's gone full circle back to the 8-bit days, when you used to get guys making a game on their own in their bedroom and selling it and making a lot of money,” he said. “Then it all started getting a bit more complicated and you needed bigger and bigger teams. Now the software and the tools have gotten to the stage where they're much easier to use, and it's now easy for someone like me to sit down, write a game on my own, and get it out there.”

Helpful Resources

Gannon used the free, open source Farseer Physics Engine to build the dynamic game elements. Farseer is a collision-detection system with realistic physics responses. Features of the engine include continuous collision detection (with time-of-impact solver); contact callbacks: begin, end, pre-solve, post-solve; convex and concave polygons and circles; and more. When stumped by a particular programming challenge, Gannon spent time mining Stack Overflow, a Q&A site for professional and enthusiast programmers. He wrote the app in C# using the Microsoft XNA Framework. Currently he is exploring porting Wormhole Pinball to MonoGame, an open source implementation of the Microsoft XNA 4 Framework that allows for easy porting of games to iOS*, Android*, and other operating systems.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android, Intel® RealSense™ technology, and Windows to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

Related articles:

 

Intel, the Intel logo, Intel Core, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2014. Intel Corporation. All rights reserved.


KAGURA: Blending Gesture and Music into a Bold New UI


Download as PDF

By William Van Winkle

Thanks to a host of mainstream innovations, the world is increasingly interested in perceptual computing. Intel is throwing open the gates and directing traffic into this new ballpark through the Intel® Perceptual Computing Challenge. This year’s Phase 2 grand prize winner, KAGURA, is a music app that delivers a fascinating spin on music creation, blending sampling with a drag-and-drop interface and an inspired use of camera depth to differentiate user functions. KAGURA lets anyone make music with an intuitive, game-like ease (Figure 1) that is wholly new.

Figure 1: Hit it! The KAGURA interface lets users activate instrument sounds with their bodies or even by “striking” them with objects.

Shunsuke Nakamura and his team at Shikumi Design, a forward-thinking outfit founded in 2005, wanted to design a system where PC sensors and musical enjoyment might overlap. Nakamura explained, “We wanted to target people who cannot play musical instruments but are interested in performing music. By having these people move their bodies, it is possible to go beyond practicing music. We wanted to let them produce and arrange.” The end result, KAGURA, is a drag-and-drop marvel of motion-based music creation. The application comes with a host of ready-made instrument sounds that users can place on the interface as icons that overlay the user’s live image as seen by the host system’s camera. When the user passes a hand over the icon, or perhaps whacks it with a mallet, that sound gets played.

KAGURA Form and Function

There’s more going on in KAGURA’s UI, and more subtle refinement of that interface, than may meet the eye. The most innovative aspect of KAGURA, though, may be its use of distance as measured by the 3D-based Creative Interactive Gesture Camera. Volume is controlled through the user’s distance from the camera, and when the user’s hand is less than 50 cm (about 20 inches) from the lens, the program enters a special mode (Figure 2) that makes it appear as if the hand is reaching below a water surface—a novel and visually striking element.

Figure 2: Part of the genius in KAGURA’s interface is its separation of several controls into a “near-mode” that is reachable only by putting a hand close to the camera and through a virtual sheet of water.

Diving into Water

Shikumi wanted a way to differentiate between “near” and “far” controls. This approach would deliver a simpler, more intuitive interface and meet Nakamura’s objective to eliminate the need for a manual or lengthy feature descriptions. The team decided that 50 cm from the camera was a good distance at which to place a virtual barrier to distinguish between the two distances. Nakamura originally wanted a “film” effect, but his team had previously developed the physics computations for rendering water, so they took the seemingly easier route and recycled their prior work. However, it didn’t turn out to be that easy.

“Our programmer had actually given up, saying, ‘This is impossible,’” noted Nakamura. “But we decided to push forward a bit more with a final effort, and then we finally achieved success. With the water effects, the result felt very good. Our process steps were to perform image processing, divide the depth data value into two at 50 cm from the camera, create wave-by-wave equations, distort the image based on wave height, and add color to the part of wave closest to the player.”

The near- and far-mode paradigms were one way to work around the accuracy challenges of KAGURA. Using distance, Shikumi segregated playing from object manipulation to give the user full functionality from a single UI screen and greatly reduce unintentional user errors. However, making the water effect (Figure 3) look convincing was no simple task.

Figure 3: A considerable amount of graphical abstraction and computation went into creating KAGURA’s water effect.

Nakamura presented the following explanation of how the designers compute waves:

A wave equation with damping force and restoring force is expressed as follows:

$$\frac{\partial^2 u}{\partial t^2} = c^2\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) - D\,\frac{\partial u}{\partial t} - K\,u$$

where u is the displacement (Figure 4), D is the damping coefficient, K is the stiffness coefficient, and c is the wave velocity. By means of a Taylor expansion,

$$u(t \pm \Delta t) \approx u(t) \pm \Delta t\,\frac{\partial u}{\partial t} + \frac{\Delta t^2}{2}\,\frac{\partial^2 u}{\partial t^2},$$

the following approximate expressions can be obtained:

$$\frac{\partial u}{\partial t} \approx \frac{u(t+\Delta t) - u(t-\Delta t)}{2\,\Delta t}, \qquad \frac{\partial^2 u}{\partial t^2} \approx \frac{u(t+\Delta t) - 2u(t) + u(t-\Delta t)}{\Delta t^2}.$$

Similarly for x and y,

$$\frac{\partial^2 u}{\partial x^2} \approx \frac{u(x+\Delta x) - 2u(x) + u(x-\Delta x)}{\Delta x^2}, \qquad \frac{\partial^2 u}{\partial y^2} \approx \frac{u(y+\Delta y) - 2u(y) + u(y-\Delta y)}{\Delta y^2}.$$

If we assume $\Delta t = \Delta x = \Delta y = 1$, we obtain the following expression for the next displacement value:

$$u^{t+1}_{x,y} \approx \frac{(2 - K)\,u^{t}_{x,y} + c^2\left(u^{t}_{x+1,y} + u^{t}_{x-1,y} + u^{t}_{x,y+1} + u^{t}_{x,y-1} - 4u^{t}_{x,y}\right) - \left(1 - \tfrac{D}{2}\right)u^{t-1}_{x,y}}{1 + \tfrac{D}{2}}.$$

Equivalent code may be as follows:
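A minimal sketch in C#, assuming a double-buffered displacement grid and the near-mode clamping described in the Implementation section below (the array and parameter names are illustrative, not taken from KAGURA's source, which is written in C++ and Lua):

```csharp
using System;

static class WaveSimulation
{
    // Advances the damped wave field by one time step.
    // prev, curr, next: displacement grids u at t-1, t, and t+1; values normalized to [-1, 1].
    // nearMode: true where the player's hand is within 50 cm of the camera.
    public static void Step(float[,] prev, float[,] curr, float[,] next,
                            bool[,] nearMode, float D, float K, float c)
    {
        int w = curr.GetLength(0), h = curr.GetLength(1);
        for (int x = 1; x < w - 1; x++)
        {
            for (int y = 1; y < h - 1; y++)
            {
                if (nearMode[x, y])
                {
                    next[x, y] = 1.0f;          // near-mode region is forced to the upper limit
                    continue;
                }

                float laplacian = curr[x + 1, y] + curr[x - 1, y]
                                + curr[x, y + 1] + curr[x, y - 1] - 4f * curr[x, y];

                float u = ((2f - K) * curr[x, y]
                           + c * c * laplacian
                           - (1f - 0.5f * D) * prev[x, y]) / (1f + 0.5f * D);

                next[x, y] = Math.Max(-1f, Math.Min(1f, u));   // keep in the normalized range [-1, 1]
            }
        }
    }
}
```

Each frame, the roles of the three buffers rotate (prev takes the old curr, curr takes the old next), and the resulting grid drives the refraction step described next.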

Figure 4: Each displacement u is calculated by its past values and neighbor values.

Refraction

We assume the distortion is approximately proportional (Figure 5) to the gradient of the wave, $\nabla u = \left(\partial u/\partial x,\; \partial u/\partial y\right)$.

Figure 5: If the camera image is set at the back of the wave, as shown here, the camera image will look distorted.

Implementation

The steps to the image processing are as follows:

1. Get the depth image.

The depth image can be obtained through the Intel® Perceptual Computing SDK. Each pixel of the depth image indicates the distance (in mm) from the camera. The depth image can be mapped to color coordinates using a UV map.

2. Binarize the depth image.

For each pixel src(i), if src(i) < threshold, we assume its position is in near-mode region and assign dst(i) = 255. If not, we assume its position is in far-mode region and assign dst(i) = 0.

3. Dilate the binary image.

Apply dilation to remove noise and expand the near-mode region so that the region looks clear.
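A minimal C# sketch of steps 2 and 3 (KAGURA itself used OpenCV in C++; the function and parameter names here are illustrative):

```csharp
// Binarize the depth image at the near-mode threshold, then apply a 3x3 dilation.
// Depth values are in millimeters; mask values are 255 (near mode) or 0 (far mode).
static byte[,] BinarizeAndDilate(ushort[,] depth, ushort thresholdMm)
{
    int w = depth.GetLength(0), h = depth.GetLength(1);
    var mask = new byte[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            mask[x, y] = depth[x, y] < thresholdMm ? (byte)255 : (byte)0;

    // 3x3 dilation: a pixel becomes near-mode if any neighbor is near-mode,
    // which smooths out noise and expands the near-mode region.
    var dilated = new byte[w, h];
    for (int x = 0; x < w; x++)
    {
        for (int y = 0; y < h; y++)
        {
            byte value = 0;
            for (int dx = -1; dx <= 1 && value == 0; dx++)
                for (int dy = -1; dy <= 1; dy++)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h && mask[nx, ny] == 255)
                    {
                        value = 255;
                        break;
                    }
                }
            dilated[x, y] = value;
        }
    }
    return dilated;
}
```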

4. Apply the wave equation.

The wave image is a floating-point image, and its pixel value is normalized in the range [-1,1]. If the position is in the near-mode region, the output value is forcibly assigned an upper limit of 1. If the position is in the far-mode region, the output value is calculated by the wave equation discussed above.

5. Apply the refraction.

From the wave and camera images, we can obtain a refracted image, as shown in Figure 5.

6. Color the region.

Color the far-mode region as water (Figure 6) so that the near-mode region looks clear.

Figure 6: The process steps between camera input and final effect rendering.

Challenges Addressed During Development

Nakamura notes that his team “did not struggle much” when creating KAGURA; however, they did struggle with interface accuracy. Specifically, capturing an on-screen icon originally proved difficult because achieving fingertip precision through a camera into a virtual space located several feet away was imprecise at best. Swinging to the other extreme by requiring much less accuracy would result in users unintentionally grabbing icons.

Naturally, the team sought to find a good compromise, but ultimately they had to decide on which side to err: precision or occasional unintended grabs? After much user testing, Nakamura finally opted for the latter. “We thought people would feel less stress with unintentional actions than intending to perform an action and failing at it.”

This decision yielded an interesting revelation: The object was not to eliminate errors for the user. Rather, the user needed to have an understanding of the intended outcome for an enjoyable experience. So long as the UI conveyed that an icon drag-and-drop should be possible, the user would be content with more than one attempt, provided that success was soon achieved.

Lessons Learned and Looking Ahead

Not surprisingly, the top lesson Nakamura passes on to aspiring gesture developers is to strive for more of a “general sense” than true gesture precision. “There will be constraints involving processing if precise actions are taken,” he said. “Rather than being accurate, it is important to be able to communicate what gesture is desired. Also, developers need to understand that this communication is part of the entertainment.”

Nakamura maintains that the touch screen model is no longer sufficient for some applications. With the advent of affordable 3D cameras and perceptual computing, designers need to start building depth into their models. Depth presents an additional stream of information that should be utilized to enable new functionality and improve experiences whenever possible. Thus, the image quality of the camera matters. As camera resolution and sensor quality improve, Nakamura expects developers to have more control and flexibility in their designs. Until then, the burden of finding workarounds remains on developers’ shoulders.

As for KAGURA, Nakamura would like to see the program grow richer, both in what it offers and the ways in which users can customize it. Currently, KAGURA offers only four musical background styles; this could be easily expanded. Similarly, users might be able to import their own plug-in instruments. Even the interface might change to allow users to bring their own background artwork.

Shikumi will continue to explore and grow its business with perceptual computing. With a grand-prize-winning application on their hands, Nakamura and his company are off to an inspiring start that he hopes will beckon many others to follow.

Resources and Tools

Shikumi made extensive use of the Intel® C++ Compiler (part of Intel® C++ Studio XE) and the Intel® Perceptual Computing SDK, which the developers found to be fast and optimal for the job. The base system of the program, such as device I/O, image processing, and sound processing, was written in C++. More internal elements, such as graphics, sounds, motion, effects, and user interactions, were written in Lua. The OpenCV library supplied basic image processing while OpenGL* served for graphics drawing.

Nakamura added, “When we use image processing outside of OpenCV, we use C++ for the basics because processing speed is required. When adjustment is necessary, we export the C++-based module to Lua and describe in Lua.”

Shikumi also made use of Eclipse* and Microsoft Visual Studio*.

About Shikumi Design and Shunsuke Nakamura

Shunsuke Nakamura is the founder and director of Shikumi Design, Inc., a software developer focused on design and interactive technology. Nakamura received his PhD in Art and Technology (Applied Art Engineering) from the Kyushu Institute of Design in 2004. That same year, he became an associate professor at the Kyushu Institute of Technology, where he continues to teach today. In 2005, Nakamura started Shikumi Design, bringing several of his junior researchers from the school. The group began winning industry awards in 2009, but taking the grand prize in Intel’s Perceptual Computing Challenge against 2,800 competitors across 16 countries marks their greatest competitive achievement so far.

Intel® RealSense™ Technology

Developers around the world are learning more about Intel® RealSense™ technology. Announced at CES 2014, Intel RealSense technology is the new name and brand for what was Intel® Perceptual Computing technology, the intuitive, new user interface SDK with functions such as gesture and voice that Intel brought to the market in 2013. With Intel RealSense technology users will have new additional features, including the ability to scan, modify, print, and share in 3D, plus the technology offers major advances in augmented reality interfaces. These new features will yield games and applications where users can naturally manipulate and play with scanned 3D objects using advanced hand- and finger-sensing technology.

Intel® Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel RealSense technology and Windows* to download tools, access dev kits, share ideas with like-minded developers, and participate in hackathons, contests, roadshows, and local events.

Related Articles

Head of the Order* Casts a Spell on Perceptual Computing

Contest Winners Combine Augmented Reality with an Encyclopedia with ARPedia*

How to Integrate Intel® Perceptual Computing SDK with Cocos2D-x

Using Intel® Perceptual Computing SDK in the Unity* 3D Environment

Voice Recognition and Synthesis Using the Intel® Perceptual Computing SDK

LinkedPIXEL Wins Intel® Perceptual Computing Challenge with Gesture-based Drawing Application

 

 

Intel, the Intel logo, and Intel RealSense are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2014. Intel Corporation. All rights reserved. 

Paint Your Music: A Table-Top, Multi-Touch-Enabled Virtual Musical Instrument/Game


By: Dominic Milano

Download PDF

 

 

Innovations in computing form factors such as All-in-One (AIO) and tablet devices that combine desktop-like performance with multi-touch-enabled, high-resolution screens are giving people fun new ways to experience making music and giving new meaning to the often-said statement that music is a universal language.

Recognizing this trend, TheBestSync, a China-based software company focused on integrated software and hardware solutions, entered the Intel App Innovation Contest 2013 (Intel AIC 2013) in partnership with the Intel® Developer Zone with the idea of combining game and music technology with an AIO device to create an exciting new way for people to play and enjoy music.

TheBestSync and its CEO, Alpha Lam, have been on a mission to create innovative, interactive experiences for the entertainment market for the last three years and are no strangers to Intel Developer Zone contests. They recently took the grand prize in the Intel® Perceptual Computing Challenge with their submission, JOY*, one of the first virtual music instruments built using the Intel Perceptual Computing SDK (now the Intel® RealSense™ SDK). JOY not only won the Challenge, but it also became the inspiration for TheBestSync’s most recent submission to the AIC 2013 contest, Paint Your Music, which won the Entertainment category for AIO devices running Microsoft Windows* 8.1.

Created specifically for the Intel AIC 2013, which challenged contestants to “Dream up an interactive entertainment experience that helps make the all-in-one an endless adventure,” Paint Your Music (PYM) combines multi-touch, multi-player-enabled interaction with a musical matrix game board, virtual “paint balls,” and a scoring mechanism to create a unique, fun-to-play virtual musical instrument/game.

Painting Music

As Intel AIC 2013 finalists, TheBestSync received a Lenovo IdeaCentre Horizon 27* Table PC AIO to code PYM on. The system consisted of a 27-inch 10-point touch screen, Windows 8.1, an Intel® Core™ i7 processor, 8 GB of RAM, and an NVIDIA GPU.

Lam and TheBestSync team were excited by the Lenovo AIO’s gigantic touch screen, which played a key role in PYM. The touch screen has the ability to lay flat, giving players an arcade-style, table-top game experience. “Being able to lay it flat was especially helpful,” he said. “Having such a large touch screen lets multiple people engage with it at the same time, so it’s very conducive to creating immersive environments. Plus, the Lenovo’s 8-cell lithium-polymer battery lets users run the device for up to two hours, adding a dimension of mobility to an otherwise large device.”

The AIO’s high-fidelity audio capabilities further enhance gameplay, which allows users to create their own music.

In “recreational” single-player mode, PYM functions as a virtual 3D musical instrument. Using touch, the player places “music note balls” on a matrix. Music note balls act as graphic representations of notes and behave like virtual paintballs, triggering notes and “paint dances.” The AIO’s multi-touch-enabled screen lets the player position notes with one hand and “spin” the matrix in 3D space (Figure 1).

PaintYourMusic Figure 1
Figure 1: In single-player mode, notes are positioned on a matrix by touching the screen. Users can change the angle of view in 3D space, where notes appear as pillars of color or animated “paint dances.”

In “competitive” two-player mode, players launch music note balls toward their opponents’ “music wall.” As the balls land on the matrix they produce notes. Melodies result as more notes are launched over the course of a game. Points are scored when “props” are struck by a ball. All the music is created on-the-fly in real time and can be recorded and played back.

Programming Unity

PYM was built using the Unity 3D Engine*. Programming was handled with Microsoft Visual Studio* and C#, an object-oriented language that the team preferred over C++ because of its seamless compatibility with Unity. For PYM’s audio capabilities, TheBestSync team used the interactive framework they refined while developing JOY. The framework is based on the audio engine that ships with Unity and includes additional digital signal processing audio effects, such as reverb and echo (delay), as well as the ability to record and play back music created during gameplay.

Before founding TheBestSync, Lam spent a decade running a music production house, which afforded the PYM development team ready access to an extensive, proprietary library of instrument sounds that they were able to plug into Unity’s step sequencer. The library supports high-fidelity 16-bit, 24 kHz MP3 and WMA sound files as well as 24-bit, 48 kHz WAV files that were created using Avid Pro Tools* and Apple Logic* Pro digital audio workstation software.

PaintYourMusic Figure 2
Figure 2: Paint Your Music’s multi-player UI. (A) The music matrix. (B) Music note balls. (C) Red player “prop” launcher. (D) Blue player music wall. (E) Elapsed time display and a menu for accessing settings, choosing what kind of applause you’ll hear, and other items.

PYM’s user interface combines 2D and 3D assets. The 3D content was produced using Autodesk Maya*, and to simulate “paint dances,” the team used Unity’s particle system and physics engine.

Challenges and Solutions

Achieving a balance between great-looking visuals and real-time performance proved to be the team’s biggest challenge. The particle system was particularly thirsty for CPU horsepower. To achieve higher frame rates, PYM’s code took advantage of both the on-chip GPU in the Intel Core i7 processor and the Lenovo AIO’s NVIDIA GPU (Figure 3). To further boost performance, the team reduced the number of simultaneous particles from 2,500 to about 1,000 and adjusted the size and texture of the particles.

PaintYourMusic Figure 3
Figure 3: Paint Your Music achieves 9 fps playback with the NVIDIA GPU engaged (left) and 15 fps when both the NVIDIA GPU and Intel® HD Graphics are used (right).
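As an illustration of that kind of tuning, a Unity script along these lines can cap the particle budget at run time. This sketch uses the current Unity ParticleSystem API, which differs slightly from the 2013-era API the team worked with, and the script name is illustrative:

```csharp
using UnityEngine;

// Caps a ParticleSystem's particle budget at runtime; attach to the "paint dance" effect object.
public class ParticleBudget : MonoBehaviour
{
    public int maxParticles = 1000;   // reduced from the roughly 2,500 the team started with

    void Start()
    {
        var ps = GetComponent<ParticleSystem>();
        var main = ps.main;           // main module accessor (Unity 5.3+ API)
        main.maxParticles = maxParticles;
    }
}
```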

In addition to fine-tuning the particle system, TheBestSync team used multi-threading to enable PYM to run multiple processes—graphics, sound, physics—across multiple cores and threads. “PYM’s real-time 3D graphics and music would not have been possible without the Intel® processor built into the Lenovo AIO platform,” said Lam.

Multi-touch support was another hurdle that TheBestSync had to overcome. The Unity 3D engine doesn’t support multi-touch natively, so the team used C# to script a plug-in that references the Windows 8 Touch API. The team followed Microsoft’s suggested best practices for setting touch-target sizes. “The Lenovo all-in-one’s touch screen is huge, so while the size of touch-targets can be problematic on small form-factor devices, it wasn’t an issue for us,” said Lam.
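The shape of such a bridge, sketched in C#, might look like the following. "WinTouchPlugin", the GetActiveTouches export, and the struct layout are all hypothetical names used for illustration, not TheBestSync's actual plug-in API:

```csharp
using System.Runtime.InteropServices;
using UnityEngine;

// Illustrative bridge from a Unity script to a native Windows touch plug-in.
public class NativeTouchBridge : MonoBehaviour
{
    [StructLayout(LayoutKind.Sequential)]
    private struct NativeTouch
    {
        public int Id;
        public float X;
        public float Y;
    }

    [DllImport("WinTouchPlugin")]
    private static extern int GetActiveTouches([Out] NativeTouch[] buffer, int maxTouches);

    private readonly NativeTouch[] buffer = new NativeTouch[10];   // 10-point touch screen

    void Update()
    {
        int count = GetActiveTouches(buffer, buffer.Length);
        for (int i = 0; i < count; i++)
        {
            // Hand each active touch point to the game's own input handling.
            HandleTouch(buffer[i].Id, new Vector2(buffer[i].X, buffer[i].Y));
        }
    }

    private void HandleTouch(int id, Vector2 screenPosition)
    {
        // Game-specific logic (hit-testing the music matrix, launching note balls, etc.) goes here.
    }
}
```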

Testing, Testing, Testing

Intel® Graphics Performance Analyzers (Intel® GPA) was used to diagnose and debug PYM. Intel GPA’s ability to display CPU and GPU usage visually helped the team quickly identify bottlenecks in the Microsoft DirectX*-based graphics pipeline. The team used Unity 3D’s Profiler window to handle the general debugging of code related to memory usage, audio, physics, and rendering.

Formal, commercial-level beta testing has not yet been conducted.

Next Steps

TheBestSync plans to launch PYM by the end of 2014. Its goals include:

  • Improving the user experience
  • Implementing Intel RealSense technology-based interaction, for example, touch-free gestural input, facial recognition, and the ability to detect a player’s emotions during gameplay
  • Adding support for online play.

In addition, Lam hopes to add support for OpenGL* and OpenGL ES* to enhance the graphics pipeline’s performance on mobile platforms.

Another feature the company plans to implement is in-app purchasing using different device-specific payment SDKs. Currently, TheBestSync uses the Microsoft Azure* platform for distributed cloud services, but they’re investigating alternatives such as the Amazon Web Services cloud platform.

Distribution will be handled through the Microsoft App Store and other similar app stores.

Lam is considering migrating PYM to other mobile platforms, which will present new challenges, not the least of which is scaling the app’s UI and graphics to accommodate small form-factor devices and screens, devices of varying capabilities, and other fragmentation issues.

About the Developer

Alpha Lam studied applied electronics and spent 10 years in music production before founding TheBestSync. The company currently employs 20 people. The PYM team included three software developers, three 3D graphics programmers, and an audio engineer.

Helpful Resources

Alpha Lam has been a member of CSDN, the largest developer community in China, almost from its inception. CSDN proved invaluable in helping Lam and his team research new technologies, including the Intel RealSense SDK, which he plans to use to enhance PYM in the future.

Intel Developer Zone offers tools and how-to information for cross-platform app development, platform and technology information, code samples, and peer expertise to help developers innovate and succeed. Join our communities for the Internet of Things, Android*, Intel RealSense technology and Windows* to download tools; access dev kits; share ideas with like-minded developers; and participate in hackathons, contests, roadshows, and local events.

 

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Overview of Ultrabook™ 2 in 1 with Recommendations for Mode-Aware App Development


Download as PDF

Introduction

Ultrabook™ 2 in 1s are PCs that can transition from laptop mode (for productivity needs) to tablet mode (for consumption needs). In this article, you can explore the hardware capabilities of 2 in 1 devices, opportunities that Windows* 8/8.1-based 2 in 1 devices provide to both consumer and enterprise applications, and recommendations for developing apps for the best user experience on each mode.

Scope

  • Windows 8/8.1 Desktop Application Development
  • 2 in 1 Device Classification (what is and is not a 2 in 1)
  • 2 in 1 Usage Models
  • 2 in 1-Mode Aware Applications
  • Recommendations for Developing Applications for Ultrabooks
  • Considerations for 2 in 1-Aware Application Readiness

What are Ultrabook™ 2 in 1s?

An Ultrabook 2 in 1 is a PC when you need it and a tablet when you want it. It has the same power and performance as a desktop PC or a traditional laptop, allowing you to perform all of the productivity and data creation tasks needed without compromising power or performance. Capabilities like touch and sensors and input devices like a stylus are also available on most 2 in 1 devices. And when users are on the go and want to do mobile (consumption) usages, like watch a movie, play games, check email, or read a book, they can switch to tablet mode, which is not only more convenient, but makes it unnecessary to have multiple devices. The extended battery life that is required on 2 in 1s adds even more value.

To be considered a true 2 in 1 device, the keyboard must be engineered for an integrated 2 in 1 design and must convert or detach into flat tablet usage. Devices with an aftermarket keyboard do not qualify as an Ultrabook 2 in 1. Additional criteria for Ultrabook 2 in 1s include a robust operating system like Windows 8/8.1 and a screen size of at least 10” or larger.

The following table shows some of the 2 in 1 device configurations currently available.

Table 1: Available Variations on 2 in 1 Device Configurations

Form Factor | Tablet Mode | Laptop Mode
Folder | 181°–360° | 0°–180°
Ferris Wheel | Screen facing outwards | Screen facing the keyboard
Slider | Screen covering any part of the keyboard | Screen not covering any part of the keyboard
Swivel | Closed-lid position with the screen facing up, or stand mode with the screen facing up | Any other position
Detachable | Screen detached from the keyboard, or keyboard attached with the screen not facing the keyboard | Attached with the screen facing the keyboard
Dual Screen | Closed-lid position with the cover facing up, or stand mode with the cover/screen facing up | Screen facing the keyboard

What Isn’t an Ultrabook 2 in 1?

Just having a detachable display and keyboard doesn’t mean that a device can be classified as an Ultrabook 2 in 1. Below are some of the devices that are not included in the 2 in 1 classification:

  • Touch-Enabled Laptops: While touch is a new feature on many laptops, it doesn’t meet the 2 in 1 definition because the device doesn’t convert from a laptop to a tablet and many aren’t lightweight or have other tablet-like capabilities.
  • Android*/iOS*/Win RT Slate Tablets: These devices have an aftermarket keyboard that does not offer multi-tasking, application, and peripheral compatibility and thus cannot be classified as 2 in 1 devices.
  • All in One Devices: While innovative and prominently used for gaming applications, these devices do not offer the capabilities of a laptop and tablet built into one device.
  • Phablet: The form factor that intersects phone and tablet capabilities does not offer the same processing power, graphics capabilities, and battery life as 2 in 1 devices and thus cannot be classified as such.

Usage Model Examples

Now that you understand the device characteristics of 2 in 1 devices, let us explore some examples of the usage models that are supported on these devices. 2 in 1 devices change the paradigm for PC-based applications. In a world where we used to develop separate applications for productivity and consumption usages, 2 in 1s now provide a flexible platform to develop one application to meet both needs. The examples below illustrate some possibilities for consumer and enterprise applications on Ultrabook 2 in 1s.

Consumer Usages

  • Multi-tasking versus single application usage: Laptop mode allows for multitasking between many applications, whereas multitasking is less likely in a tablet mode. Note, however, that this distinction is based on form factor usages in Windows 8 Desktop mode. Windows Modern UI applications have to adhere to the Windows 8 sandboxing guidelines.
  • Games: Interactive games supporting keyboard/joysticks in laptop mode and touch-enabled overlay controls in tablet mode (Ex: GestureWorks* GamePlay)
  • Data creation versus consumption: Including productivity and consumption usages in the same application requires additional design considerations.

Table 2 lists some of the usages for the two modes and shows differences and requirements for developing applications for 2 in 1 devices:

Table 2: Comparison of Laptop and Tablet Modes for Consumer Usage

Laptop Mode | Tablet Mode
Browse and edit video and audio content (Ex: Magix Video*/Movie Edit Family*, www.magix.com) | Play video and audio content
Make advanced edits using keyboard/mouse (Ex: Krita Gemini* application, www.kogmbh.com/download.html) | Paint using touch/stylus
Purchase a product | Browse for products
Type an email | Browse email
Use productivity applications to create documents or spreadsheets | Browse documents or spreadsheets
Add comments or type a response on social networking sites | Browse a social networking site for updates
Create documents using keyboard/mouse | Read documents (using touch for navigation and stylus for taking notes)

Enterprise Usages

Using multiple devices to accomplish different tasks is common in many organizations. For example, a sales representative may use a laptop for detailed data analysis but carry a tablet to take notes during customer visits. 2 in 1 devices reduce the need to have multiple devices. The table below provides some examples of enterprise usages for Ultrabook 2 in 1s.

Table 3: Comparison of Laptop and Tablet Modes for Enterprise Usage

Category | Laptop Mode | Tablet Mode
Education | Take notes in class, create study tools like flashcards, write papers, create presentations, and copy and paste notes from multiple sources | Consume data, such as reading books and watching lectures, while simultaneously taking notes with a stylus
Health Care | Enter chart notes using keyboard and mouse | Enter chart notes using a stylus; look up patient diagnostics using touch and stylus
Retail | Place orders, do in-depth sales or inventory analysis | Search inventory for customers, help locate stock, even complete sales transactions on the show floor to reduce customer wait times

2 in 1-Aware Applications

The following are some examples of the types of experiences 2 in 1-aware applications could provide:

  • Transformative UI: Applications that seamlessly change their UI to match the dual nature of 2 in 1 devices provide a good user experience. An application that presents a traditional PC-style UI in laptop mode and a fully touch-optimized UI in tablet mode is well positioned to deliver a great UX. The UI switch can be automatic or manual, as explained in the recommendations section below.
  • Applications that provide a touch overlay on existing apps: This approach suits gaming applications that were designed for Windows* 7 around keyboard/joystick interactivity. When designing and developing for Windows 8, which is a touch-first OS, implementing an overlay with touch controls can provide a good user experience in both clamshell and slate modes.

Recommendations for Developing Applications for Ultrabook Devices

If you have existing apps on Windows 7 or Windows* XP, you may also want to consider the following development guidelines to help ensure a smooth transition when writing applications for the Ultrabook and 2 in 1 platforms.

Considerations for 2 in 1-Aware Application Readiness

The nature of how an application transitions from laptop mode to tablet mode is specific to each application. However, every 2 in 1-aware application must be able to: 1) detect state changes and 2) implement functionality to support the state change. Using the usage models and requirements for 2 in 1 awareness in the previous section as a reference, here are some recommendations to consider when developing applications for 2 in 1 devices.

  • Detecting a state change: Table 1 listed some 2 in 1 devices and showed how they offer state changes in the hardware. Detecting state changes in the application, however, has some dependencies on the BIOS and firmware and is not supported on all devices in the market today. In preparing your application for 2 in 1 device-enabling, we encourage you to understand how to use the Windows 8 API to detect a state change from laptop to tablet and vice versa. More information on how to detect this change is available in related Intel Developer Zone articles; a minimal detection sketch is also shown after this list.
  • Develop different UIs for laptop and tablet mode: The ways in which users interact with 2 in 1 devices in laptop and tablet modes differ significantly. While it is common to use a mouse and keyboard as primary input methods in laptop mode, touch and a stylus are preferred in tablet mode. A good consideration to enhance user experience would be to develop separate UIs for laptop and tablet modes.
  • Write code to manually detect state changes: Since automatic device state change detection may not be available, a good starting point is to provide a manual switch within the application. This will ensure that your application requires less development effort when automatic state detection support becomes available.
  • Provide touch-enabled game overlays for a great user experience: For PC games, adding a touch-enabled overlay control is an effective way of transforming the app to provide a good UX on 2 in 1 devices.
  • Optimize the application to handle multiple inputs: Touch/stylus/keyboard and mouse provide effective fallback mechanisms when one or more inputs are not available. For example, not all 2 in 1 devices support a stylus. If your application includes stylus-specific functionality, provide fallback mechanisms so those features work with touch on devices that do not support a stylus.
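
As a starting point for the first recommendation, the sketch below shows one way a Windows 8 desktop application could detect the laptop/tablet state. It relies on the documented GetSystemMetrics(SM_CONVERTIBLESLATEMODE) metric and the WM_SETTINGCHANGE ("ConvertibleSlateMode") broadcast; the SwitchUi helper and the g_forceTabletUI manual-override flag are hypothetical placeholders, and reliable reporting still depends on BIOS/firmware support on the specific device.

#include <windows.h>

enum class DeviceMode { Laptop, Tablet };

// GetSystemMetrics(SM_CONVERTIBLESLATEMODE) returns 0 while the device is in
// slate (tablet) mode and non-zero in laptop (clamshell) mode.
DeviceMode CurrentMode()
{
    return GetSystemMetrics( SM_CONVERTIBLESLATEMODE ) == 0
               ? DeviceMode::Tablet : DeviceMode::Laptop;
}

// Manual override for devices whose firmware does not report the transition
// (hypothetical application setting).
bool g_forceTabletUI = false;

LRESULT CALLBACK WndProc( HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam )
{
    // The OS broadcasts WM_SETTINGCHANGE with the string "ConvertibleSlateMode"
    // when the device converts between laptop and tablet modes.
    if ( msg == WM_SETTINGCHANGE && lParam != 0 &&
         lstrcmpiW( reinterpret_cast<LPCWSTR>( lParam ), L"ConvertibleSlateMode" ) == 0 )
    {
        DeviceMode mode = g_forceTabletUI ? DeviceMode::Tablet : CurrentMode();
        // SwitchUi( mode );   // hypothetical helper that swaps the laptop/tablet UI layouts
    }
    return DefWindowProc( hwnd, msg, wParam, lParam );
}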

Summary

While Ultrabook devices are entering the mainstream market, 2 in 1 devices are a growing market segment. Application developers are encouraged to consider the opportunities and possibilities that these devices offer for application development, use the recommendations provided in this article, and prepare their applications for these devices, thereby reducing the effort and time to market (TTM) needed to get their applications to the end user.

About the Author

Meghana Rao is a Technical Marketing Engineer in Intel’s Software and Services division. She is involved in helping developers understand the capabilities of Intel platforms and writing applications that provide a good user experience.

 

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright © 2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.

Controlling floating-point modes when using Intel® Threading Building Blocks

Intel® Threading Building Blocks (Intel® TBB) 4.2 Update 4 introduced enhanced support for managing floating-point settings. Floating-point settings can now be specified at the invocation of most parallel algorithms (including flow::graph). In this blog I want to pay attention to some peculiarities and details of the new feature and of the overall floating-point settings support in Intel TBB. This blog is not devoted to general floating-point support in the CPU. If you are not familiar with floating-point calculation support in the CPU, I’d suggest starting with the Understanding Floating-point Operations section in the Intel® C++ Compiler User and Reference Guide, or, for more information on the complexities of floating-point arithmetic, the classic “What every computer scientist should know about floating-point arithmetic”.

Intel TBB provides two approaches to allow you to specify the desired floating-point settings for tasks executed by the Intel TBB task scheduler:

  1. When the task scheduler is initialized for a given application thread, it captures the current floating-point settings of the thread;
  2. The class task_group_context has a method to capture the current floating-point settings.

Consider the first approach. Basically, this approach is implicit: the task scheduler always and unconditionally captures floating-point settings at the moment of its initialization. The saved floating-point settings are then used for all tasks related to this task scheduler. In other words, this approach can be viewed as a property of a task scheduler, which gives us two ways to apply and manage floating-point settings in our application:

  1. A task scheduler is created for each thread, so we can launch a new thread, specify the desired settings, and then initialize a new task scheduler (explicitly or implicitly) on that thread, which will capture the floating-point settings;
  2. If a thread destroys a task scheduler and initializes a new one, the new settings will be captured. Thus you may specify new floating-point settings before recreating the task scheduler; when the new task scheduler is created, the new floating-point settings will be applied to all tasks.

I’ll try to show some peculiarities with the following set of examples:

Notation conventions:

  • “fp0”, “fp1” and “fpx” – some states describing floating-point settings;
  • “set_fp_settings( fp0 )” and “set_fp_settings( fp1 )” – set floating-point settings on the current thread;
  • “get_fp_settings( fpx )” – get floating-point settings from the current thread and store them in “fpx”.
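
Intel TBB itself does not provide these helpers; they stand in for whatever mechanism your code uses to read and write the floating-point environment. A minimal sketch, assuming the only setting of interest is the rounding mode, could be built on the standard <cfenv> functions (production code might also manage the MXCSR flush-to-zero and denormals-are-zero bits):

#include <cfenv>

typedef int fp_state;                        // here: just the rounding mode

// Hypothetical helpers used throughout the examples below.
void set_fp_settings( fp_state s )  { std::fesetround( s ); }
void get_fp_settings( fp_state &s ) { s = std::fegetround(); }

const fp_state fp0 = FE_TONEAREST;           // “fp0” – round to nearest
const fp_state fp1 = FE_TOWARDZERO;          // “fp1” – round toward zero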

Example #1. A default task scheduler.

// Suppose fp0 is used here.
// Every Intel TBB algorithm creates a default task scheduler which also captures floating-point
// settings when initialized.
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp0 will be used for all iterations on any Intel TBB worker thread.
} );
// There is no longer a way to destroy the task scheduler on this thread.

Example #2. A custom task scheduler.

// Suppose fp0 is used here.
tbb::task_scheduler_init tbb_scope;
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp0 will be used for all iterations on any Intel TBB worker thread.
} );

Overall, example #2 has the same effect as example #1, but it opens a way to terminate the task scheduler manually.

Example #3. Re-initialization of the task scheduler.

// Suppose fp0 is used here.
{
    tbb::task_scheduler_init tbb_scope;
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used for all iterations on any Intel TBB worker thread.
    } );
} // the destructor calls task_scheduler_init::terminate() to destroy the task scheduler
set_fp_settings( fp1 );
{
    // A new task scheduler will capture fp1.
    tbb::task_scheduler_init tbb_scope;
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp1 will be used for all iterations on any Intel TBB worker
        // thread.
    } );
}

Example #4. Another thread.

void thread_func();
int main() {
    // Suppose fp0 is used here.
    std::thread thr( thread_func );
    // A default task scheduler will capture fp0
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp0 will be used for all iterations on any Intel TBB worker thread.
    } );
    thr.join();
}
void thread_func() {
    set_fp_settings( fp1 );
    // Since it is another thread, Intel TBB will create another default task scheduler which will
    // capture fp1 here. The new task scheduler will not affect floating-point settings captured by
    // the task scheduler created on the main thread.
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // fp1 will be used for all iterations on any Intel TBB worker thread.
    } );
}

Please notice that Intel TBB can reuse the same worker threads for both “parallel_for”s despite the fact that they are invoked from different threads. But it is guaranteed that all iterations of parallel_for on the main thread will use fp0, and all iterations of the second parallel_for will use fp1.

Example #5. Changing floating-point settings on a user thread.

// Suppose fp0 is used here.
// A default task scheduler will capture fp0.
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp0 will be used for all iterations on any Intel TBB worker thread.
} );
set_fp_settings( fp1 );
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp0 will be used even though the floating-point settings were changed before the Intel TBB
    // parallel algorithm invocation, since the task scheduler has already captured fp0 and those
    // settings will be applied to all Intel TBB tasks.
} );
// fp1 is guaranteed here.

The second parallel_for will leave fp1 unchanged on the user thread (despite the fact that it uses fp0 for all its iterations) since Intel TBB guarantees that an invocation of any Intel TBB parallel algorithm does not visibly modify the floating-point settings of the calling thread, even if the algorithm is executed with different settings.

Example #6. Changing floating-point settings inside Intel TBB task.

// Suppose fp0 is used here.
// A default task scheduler will capture fp0
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    set_fp_settings( fp1 );
    // Setting fp1 inside the task will lead to undefined behavior. There are no guarantees about
    // floating-point settings for any following tasks of this parallel_for or other algorithms.
} );
// No guarantees about floating-point settings here or in the following algorithms.

If you really need to use other floating-point settings inside a task you should capture the previous settings and restore them before the end of the task:

// Suppose fp0 is used here.
// A default task scheduler will capture fp0
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    get_fp_settings( fpx );
    set_fp_settings( fp1 );
    // ... some calculations.
    // Restore captured floating-point settings before the end of the task.
    set_fp_settings( fpx );
} );
// fp0 is guaranteed here.

The task scheduler based approach to managing floating-point settings is suitable for the majority of problems. But imagine a situation where you have two parts of a calculation that require different floating-point settings. It goes without saying that you may use the approaches demonstrated in examples #3 and #4, but you may face some possible issues:

  1. Implementation difficulties: e.g., in example #3 you may not be able to manage the lifetime of the task scheduler object, or in example #4 you may need some synchronization between the two threads;
  2. Performance impact: e.g., in example #3 you must reinitialize the task scheduler even though that was not necessary before, or in example #4 you may face over-subscription issues.

And what about nested calculations with different floating-point settings? With the task scheduler based approach, managing them is not a trivial task, since it forces you to write a lot of unnecessary code.

Thus, Intel TBB 4.2 Update 4 introduced a new task_group_context based approach. task_group_context was extended to manage the floating-point settings for tasks associated with it through the new method

void task_group_context::capture_fp_settings();

which captures the floating-point settings from the calling thread and propagates them to its tasks. This allows you to easily specify the required floating-point settings for a particular parallel algorithm:

Example #7. Specifying floating-point settings for a specific algorithm.

// Suppose fp0 is used here.
// The task scheduler will capture fp0.
tbb::task_scheduler_init tbb_scope;
tbb::task_group_context ctx;
set_fp_settings( fp1 );
ctx.capture_fp_settings();
set_fp_settings( fp0 );
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // In spite of the fact that the task scheduler captured fp0 when initialized and the parallel
    // algorithm is called from a thread with fp0, fp1 will be used here for all iterations on any
    // Intel TBB worker thread since the task group context (with captured fp1) is specified for
    // this parallel algorithm.
}, ctx );

Example #7 is not very interesting, since you can achieve the same effect if you specify fp1 before the task scheduler initialization. Let me consider our imaginary problem with two parts of a calculation that require different floating-point settings. The problem can be solved like this:

Example #8. Specifying floating-point settings for different parts of a calculation.

// Suppose fp0 is used here.
// The task scheduler will capture fp0.
tbb::task_scheduler_init tbb_scope;
tbb::task_group_context ctx;
set_fp_settings( fp1 );
ctx.capture_fp_settings();
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // In spite of the fact that floating-point settings are fp1 on the main thread, fp0 will be
    // here for all iterations on any Intel TBB worker thread since the task scheduler captured fp0
    // when initialized.
} );
// fp1 will be used here since TBB algorithms do not change floating-point settings which were set
// before calling.
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp1 will be here since the task group context with captured fp1 is specified for this
    // parallel algorithm.
}, ctx );
// fp1 will be used here.

I have already demonstrated one property of the task group context based approach in examples #7 and #8: it prevails over the task scheduler floating-point settings when the context is specified for an Intel TBB parallel algorithm. Another property is inherent in this approach: nested parallel algorithms inherit floating-point settings from a task group context specified for an outer parallel algorithm.

Example #9. Nested parallel algorithms.

// Suppose fp0 is used.
// The task scheduler will capture fp0.
tbb::task_scheduler_init tbb_scope;
tbb::task_group_context ctx;
set_fp_settings( fp1 );
ctx.capture_fp_settings();
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp1 will be here
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // Although the task group context is not specified for the nested parallel algorithm and
        // the task scheduler has captured fp0, fp1 will be here.
    }, ctx );
} );
// fp1 will be used here.

If you need to use the task scheduler floating-point settings inside a nested algorithm you may use an isolated task group context:

Example #10. A nested parallel algorithm with an isolated task group context.

// Suppose fp0 is used.
// The task scheduler will capture fp0.
tbb::task_scheduler_init tbb_scope;
tbb::task_group_context ctx;
set_fp_settings( fp1 );
ctx.capture_fp_settings();
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // fp1 will be used here.
    tbb::task_group_context ctx2( tbb::task_group_context::isolated );
    tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
        // ctx2 is an isolated task group context so it will have fp0 inherited from the task
        // scheduler. That’s why fp0 will be used here.
    }, ctx2 );
}, ctx );
// fp1 will be used here.

There is no doubt that it is impossible to demonstrate in one blog all the possibilities of the floating-point support functionality in Intel TBB. But these trivial examples demonstrate the basic ideas of floating-point settings management with Intel TBB and can be applied in real-world applications.

The main concepts of floating-point settings management can be summarized in the following list:

  • Floating-point settings can be specified either for all Intel TBB parallel algorithms via a task scheduler or for separate Intel TBB parallel algorithms via a task group context;
  • Floating-point settings captured by a task group context prevail over the settings captured during task scheduler initialization;
  • By default all nested algorithms inherit floating-point settings from an outer level if neither a task group context with captured floating-point settings nor an isolated task group context is specified;
  • An invocation of an Intel TBB parallel algorithm does not visibly modify the floating-point settings of the calling thread, even if the algorithm is executed with different settings;
  • Floating-point settings that are set after task scheduler initialization are not visible to Intel TBB parallel algorithms unless the task group context approach is used or the task scheduler is reinitialized;
  • User code inside a task should either not change the floating-point settings or restore the previous settings before the end of the task.

P.S. A deferred task scheduler captures floating-point settings when the initialize method is called.

Example #11: An explicit task scheduler initialization.

set_fp_settings( fp0 );
tbb::task_scheduler_init tbb_scope( tbb::task_scheduler_init::deferred );
set_fp_settings( fp1 );
tbb_scope.initialize();
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // The task scheduler is declared when fp0 is set but it will capture fp1 since it is
    // initialized when fp1 is set.
} );
// fp1 will be used here.

P.P.S. Be careful if you rely on the auto capture property of a task scheduler. It will fail if your functionality is called inside another Intel TBB parallel algorithm.

Example #12. One more warning: beware of library functions.

Code snippet 1. Slightly modified example #1. It is valid code and there are no issues.

set_fp_settings( fp0 );
// Run with the expectation that the Intel TBB parallel algorithm will create a default task
// scheduler, which will also capture the floating-point settings when initialized.
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {...} );

Code snippet 2. Just call “code snippet 1” like a library function.

set_fp_settings( fp1 );
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> & ) {
    // ... call “code snippet 1” here ...
} );
// You might expect fp1 to be used here, but see the second issue below.

This looks like an innocuous example since “code snippet 1” will set the required floating-point settings and perform its calculation with fp0. But it turns out that this example has two issues:

  1. By the time “code snippet 1” is called, the task scheduler will already be initialized and will have captured fp1. Thus “code snippet 1” will perform its calculations with fp1 and ignore the fp0 setting;
  2. Isolation of user floating-point settings is broken since “code snippet 1” changes floating-point settings inside the Intel TBB task and does not restore the initial ones. That’s why there are no guarantees about floating-point settings after execution of the Intel TBB parallel algorithm in “code snippet 2”.

Code snippet 3. Corrected solution.

Let’s fix “code snippet 1”:

// Capture the incoming fp settings.
get_fp_settings( fpx );
set_fp_settings( fp0 );
tbb::task_group_context ctx;
ctx.capture_fp_settings();
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> &r ) {
    // Here fp0 will be used for all iterations on any Intel TBB worker thread.
}, ctx );
// Restore fp settings captured before setting fp0.
set_fp_settings( fpx );

Code snippet 2 remains unchanged.

set_fp_settings( fp1 );
tbb::parallel_for( tbb::blocked_range<int>( 1, 10 ), []( const tbb::blocked_range<int> &r ) {
    // ... call the fixed “code snippet 1” here ...
} );
// fp1 will be used here since the “fixed code snippet 1” does not change the floating-point
// settings visible to “code snippet 2”.

 

Using Unity* 3D Standard GUI with TouchScript Assets

By Lynn Thompson

Download PDF

The standard Unity* 3D graphical user interface (GUI) widgets respond to touch as if you had clicked them with a mouse using Windows* 8 Pen and Touch functionality, and you cannot at present configure them for multitouch with multiple gestures. The example in this article shows how to use the TouchScript Pan Gesture to enhance standard GUI objects. The resulting widgets provide the look and feel of the standard Unity 3D GUI widgets, and you can relocate them on the screen by dragging them (Pan Gesture). The example shows Unity 3D running on Windows 8, a proficient platform for providing users with GUI widgets they can customize in a flexible manner.

Creating the Example

The example begins by creating a few spheres and cubes in the view of the scene’s main camera. This scene is simple and contains only a few pieces of three-dimensional (3D) geometry that you can modify by manipulating standard Unity 3D GUI widgets. The main Unity 3D GUI widgets used are buttons, horizontal slider bars, and radio buttons. After each Unity 3D GUI widget has been configured, you then place a Unity 3D quad asset at the same location, with dimensions similar to the standard Unity 3D GUI widgets. You configure this quad with a TouchScript Pan Gesture that allows users to move it. The position of the quad asset feeds the location parameters of the standard Unity 3D GUI widget for which it has been configured and with which it shares a location. The result is a standard Unity 3D GUI widget that users can move around the screen through a TouchScript Pan Gesture. Figure 1 provides a screenshot of the example.

Figure 1. Three Unity* 3D standard GUI widgets

Add a Standard GUI Widget

Start by adding three GUI widgets in the OnGUI function. The first widget contains a panel and a few buttons to adjust the scale of the simple scene geometry. The second widget is a panel containing radio buttons for discretely changing the scale of the geometry in the scene. The third widget contains three horizontal slider bars that are tied to the scene geometry’s rotation around the x-, y-, and z-axes. Place these three widgets in the upper left, upper middle, and upper right areas of the screen. Their placement is based on a 1024x768 screen resolution.

Many resources are available to assist in configuring standard Unity 3D GUI widgets. The source code for the Unity 3D GUI widgets used in this example is provided in the accompanying Unity 3D project.

Configure TouchScript

This example adds Unity 3D quad primitives as the TouchScript target that facilitates the touch movement of the standard Unity 3D GUI widgets. You add these primitives programmatically, not in the Unity 3D editor. The code to add the quad used to move the upper left button GUI widget is as follows:

Public Class:
.
.
.
private GameObject buttonQuad;//Declare asset to receive gestures
//Declare variables to manipulate standard GUI position via gestures
private Vector3 buttonQuadStartPosition;
private float buttonQuadDeltaX;
private float buttonQuadDeltaY;
.
.
.
Start Function:
.
.
//Create asset to receive gestures
buttonQuad = GameObject.CreatePrimitive (PrimitiveType.Quad);

//Set the position vector of the asset receiving gestures to match the standard
//GUI position
buttonQuad.transform.position = new Vector3(-0.7f,2.25f,-10.0f);

//Add the TouchScript components to make the touch asset responsive to gestures
buttonQuad.AddComponent ("PanGesture");
buttonQuad.AddComponent ("PanScript");

//Initially position the touch gesture asset
buttonQuadStartPosition = buttonQuad.transform.position;
//Initialize the position change variables
buttonQuadDeltaX = 0.0f;
buttonQuadDeltaY = 0.0f;

//Make the touch gesture asset invisible
MeshRenderer buttonQuadRenderer = (MeshRenderer) buttonQuad.GetComponent ("MeshRenderer");
buttonQuadRenderer.enabled = false;
.
.
.
Update function:
.
.
//Set the position change variables. The 235 scaling factor is for a 1024x768
//screen resolution. In a production application this scaling factor would be
//set based on the resolution chosen by the application or user at launch.
buttonQuadDeltaX = -235*(buttonQuadStartPosition.x - buttonQuad.transform.localPosition.x);
buttonQuadDeltaY = 235*(buttonQuadStartPosition.y - buttonQuad.transform.localPosition.y);
.
.
OnGUI function:
.
.
////////////////////// Button Menu Begin ////////////////////////////////////
//Draw a standard GUI box at a position altered by the position of the asset
//receiving touch gestures
		GUI.Box(new Rect(10+buttonQuadDeltaX,10+buttonQuadDeltaY,240,160), "Button Menu");

//Draw a standard GUI button at a position altered by the position of the asset
//receiving touch gestures

		if(GUI.Button(new Rect(20+buttonQuadDeltaX,40+buttonQuadDeltaY,220,20), "Increase Scale (4x Maximum)"))
		{
			//While increasing the cube scale, limit the cube scaling
			//to be between 0.25 and 4

			if (scale < 4.0)
			{
				scale += 0.1f;
				cube01.transform.localScale += (new Vector3(0.1f,0.1f,0.1f));
				cube02.transform.localScale += (new Vector3(0.1f,0.1f,0.1f));
				sphere01.transform.localScale += (new Vector3(0.1f,0.1f,0.1f));
				sphere02.transform.localScale += (new Vector3(0.1f,0.1f,0.1f));

			}
			if (scale == 4.0f)
			{
				maxscale = true;
				minscale = false;
				defaultscale = false;
			}
			else
			{
				maxscale = false;
			}
			if (scale == 0.25f)
			{
				minscale = true;
				maxscale = false;
				defaultscale = false;
			}
			else
			{
				minscale = false;
			}
			if (scale == 1.0f)
			{
				defaultscale = true;
				maxscale = false;
				minscale = false;
			}
			else
			{
				defaultscale = false;
			}
		}

		if(GUI.Button(new Rect(20+buttonQuadDeltaX,80+buttonQuadDeltaY,220,20), "Decrease Scale (0.25x Minimum)"))
		{
			if (scale > 0.25)
			{
			//While decreasing the cube scale, limit the cube scaling
			//to be between 0.25 and 4

				scale -= 0.1f;

				cube01.transform.localScale -= (new Vector3(0.1f,0.1f,0.1f));
				cube02.transform.localScale -= (new Vector3(0.1f,0.1f,0.1f));
				sphere01.transform.localScale -= (new Vector3(0.1f,0.1f,0.1f));
				sphere02.transform.localScale -= (new Vector3(0.1f,0.1f,0.1f));
			}
			if (scale == 4.0f)
			{
				maxscale = true;
				minscale = false;
				defaultscale = false;
			}
			else
			{
				maxscale = false;
			}
			if (scale == 0.25f)
			{
				minscale = true;
				maxscale = false;
				defaultscale = false;
			}
			else
			{
				minscale = false;
			}
			if (scale == 1.0f)
			{
				defaultscale = true;
				maxscale = false;
				minscale = false;
			}
			else
			{
				defaultscale = false;
			}
		}

		//Create a button to exit the application
		if(GUI.Button(new Rect(20+buttonQuadDeltaX,120+buttonQuadDeltaY,220,20), "Exit Application"))
		{
			Application.Quit();
		}

		GUI.Label (new Rect(20,180,220,20),scale.ToString());
		////////////////////// Button Menu End//////////////////////////////////
.
.
.

The PanScript script programmatically added to the quads allows for the panning and drag-and-drop behavior of the quads. You can review this functionality in the accompanying example or the “Everything” example provided with the TouchScript package. The SGwTS.wmv video that accompanies this example shows the GUI widgets in use.

Example Enhancements

The most tedious step in creating this example is placing the quad behind the GUI widget. It’s a manual, trial-and-error process in which the x, y, and z values are changed until the quad is lined up with the standard Unity 3D GUI widget. This alignment is not valid if at application startup the user changes the screen resolution to something other than 1024x768. To enhance this example, you could use the Unity 3D GUI layout functionality followed by automatic placement of the affiliated TouchScript Pan Gesture touch target quad.

The quad used in the accompanying example was sized for the same width as the standard Unity 3D GUI widget and a larger height dimension. The result is “tabs” at the top and bottom of the widget that the user touches to drag the widget around the screen. You can adjust the position of the quad relative to the standard Unity 3D GUI widget to move the touch target point. For example, the quad could line up with the widget on the sides and bottom, moving the intended touch target to the top of the widget.

I say intended because in the accompanying example, the user can touch anywhere on the quad to move it and the affiliated widget, including the areas of the quad that occupy the same screen space as the widget. To limit the touch target points to those on the quad that extend beyond the screen space that both share, you can add a second “blocking” quad. This blocking quad would have the same dimensions as the standard Unity 3D GUI widget and be placed at the same location. The quad used to alter the position of the standard widget determines the position of this blocking quad. With this functionality in place, the blocking quad then uses the TouchScript Untouchable behavior. An example of this behavior is provided with the TouchScript Hit example. Implementing this functionality eliminates a flaw in the accompanying example where the slider widget moves when the horizontal slider bars are moved.

This method of using touch target quads and block quads can be extended to allow movement of user-configurable widgets. By stacking touch quads and block quads, users can move the GUI widget components independently of the GUI widget in its entirety. For example, a user can manipulate individual horizontal sliders independently of each other at the rectangle base. Similarly, you could use a custom Pan Gesture to limit the movement of the GUI widget component within the borders of the widget.

Touch Order

Remember that one of the GUI widgets you configured allows you to scale the geometry primitives in the scene. The left cube and right sphere have been configured with a Pan Gesture, and you can drag them around the scene as desired. The center cube and center sphere have not been configured with any TouchScript gestures and do not respond when touched. This behavior creates an issue: if the scene geometry is scaled up sufficiently, it engulfs the quads used to move the GUI widgets. If a user scales the geometry primitives to such an extreme and then drags a GUI widget on top of them, the next touch gesture that targets this area of the screen activates the geometry primitive, not the GUI widget. Fortunately, you can use this touch gesture to move the geometry primitive from behind the GUI widget, allowing the widget to respond to touch gestures again.

If a user drags a GUI widget on top of one of the center primitives that has not been configured for touch gestures, the widget will be stranded. You can resolve this issue by assigning the scene primitive the TouchScript Untouchable Behavior by means of the Add Component button (see Figure 2). This configuration allows touch gestures to pass through the primitive to the GUI widgets that you have configured with a Pan Gesture. You can observe this behavior in the accompanying video, where a GUI widget dropped on the left cube or right sphere results in the next touch gesture moving the cube or sphere. A GUI widget dropped on the middle sphere (which has the Untouchable Behavior assigned) results in the next touch gesture moving the GUI widget. If the scale factor is large enough for the middle cube to engulf the unrendered quads and a GUI widget is dropped on the middle cube, the GUI widget is stranded.

You must use the TouchScript Untouchable Behavior in any scenario where you don’t want a scene asset to affect the GUI widget. For a first-person shooter (FPS) game using the GUI widgets described in this article, you would need to configure the TouchScript Untouchable Behavior for every scene asset with which the FPS’s main camera may come in contact.

Figure 2. Configuring a Unity* 3D scene asset to allow touch gestures for objects behind the asset

Render Order

The visible components of the GUI widgets configured in this article are native Unity 3D GUI widgets constructed in the OnGUI function. As such, they will always be visible, and there is no need to configure a dedicated GUI widget camera to render a unique GUI widget layer.

Conclusion

The TouchScript package works well in combination with the standard Unity 3D GUI widgets. You can use this combination in any number of ways to produce traditional GUI widgets that users can move around the screen through drag-and-drop-type gestures, saving the time and effort needed to create custom geometry for TouchScript gestures and a completely custom GUI widget. If you don’t require that level of customization, the method in this example provides a means of quickly generating GUI widgets that respond to drag-and-drop gestures and work smoothly and reliably on the Unity 3D–Windows 8 platform.

Related Content

About the Author

Lynn Thompson is an IT professional with more than 20 years of experience in business and industrial computing environments. His earliest experience is using CAD to modify and create control system drawings during a control system upgrade at a power utility. During this time, Lynn received his B.S. degree in Electrical Engineering from the University of Nebraska, Lincoln. He went on to work as a systems administrator at an IT integrator during the dot com boom. This work focused primarily on operating system, database, and application administration on a wide variety of platforms. After the dot com bust, he worked on a range of projects as an IT consultant for companies in the garment, oil and gas, and defense industries. Now, Lynn has come full circle and works as an engineer at a power utility. Lynn has since earned a Masters of Engineering degree with a concentration in Engineering Management, also from the University of Nebraska, Lincoln.

Intel, the Intel logo, and Ultrabook are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2014. Intel Corporation. All rights reserved. 
