I’m finally back with another blog post. I have been really busy and haven’t had time to blog as much as I wanted to, but I have finally gotten around to putting together a hopefully interesting post. Ever since I heard that Silverlight 3 had multi-touch support, I have wanted to try it out. Unfortunately I’m not wealthy enough to run off and get a new laptop just because the one I have doesn’t support multi-touch. So I have had to wait, until a couple of days ago, when I finally got around to buying an iPod Touch. Using the iPod Touch’s multi-touch support to do Silverlight multi-touch development is actually not that hard, just a bit fiddly.
There are several different solutions for getting this to work. Most of them have one thing in common though: your computer runs some form of software that connects to your chosen input device via WiFi and forwards the commands to the computer. One of the more common and talked-about solutions is a project called MIRIA. MIRIA is a framework that helps out when doing multi-touch development by adding a bit of gesture support and a few other things. It even makes it possible to try multi-touch using Silverlight 2. It does this by using a client application that collects the touch messages and forwards them to Silverlight using sockets.
However, I have decided to go in another direction. I wanted to start by trying out Silverlight’s native multi-touch features. The native support is a bit limited, but still powerful, even if you need to do a lot more work on your own. But let’s start off by getting multi-touch support using an iPod Touch or iPhone…
Start by downloading the client tools needed on your computer. One of the tools available for this is on CodePlex and is called Multi-Touch Vista. Even though it is called Multi-Touch Vista, it is made for Windows 7. Yeah…there is that little caveat, you need to run Windows 7 for multi-touch to work. But you should really be doing that anyway… When you have downloaded the zip from CodePlex, you will find that it is full of files and very little information. But this is what I did to get it to work. Start by opening the Driver folder and run Install Driver.cmd to install a multi-touch driver on your machine. This is the first of three parts needed to get it to work. The next two steps involve Multitouch.Driver.Service.exe and Multitouch.Service.exe. These two services are responsible for making it all work. If you want to run them as services, you need to register them as such. That is done fairly simply by running the following commands in the Visual Studio 2008 Command Prompt
installutil C:\....\Multitouch.Driver.Service.exe
installutil C:\....\Multitouch.Service.exe
Both services are installed with the setting to start up manually, so you need to start them yourself when you want to use them. If you don’t want to run them as services, you can run their corresponding xxx.Console.exe as standalone applications. These provide the same functionality as console applications instead of services.
Personally, I have had some problems with Multitouch.Driver.Service.exe as a service, so I run that one as a console app. Also remember the following: Multitouch.Service.exe must be running before you start Multitouch.Driver.Service.exe.
The next step is to run Multitouch.Configuration.WPF.exe, which is used to configure the services. There are two modes for running the Multi-Touch Vista features. MultipleMice makes it possible to use multiple mice to simulate multi-touch, while Tuio gets touch information from a third-party input. This third-party input could be anything from an iPod or iPhone to a specially configured webcam. To be perfectly honest, I have not tried the webcam solution, but there are some cool ideas for home-built touch tables and other touch-enabled things available here.
Anyhow, you want to select the Tuio mode and press the big button with the blue arrow on it to activate it. After that, with both services running and the Tuio mode set up, it is time to get the iPod ready. There are several different pieces of software available to get the iPod to send Tuio commands to your computer. I chose not to jailbreak my iPod, and therefore bought one from the Apple App Store. The one I picked is called OSCemote (apparently pronounced “Ah-ski-mote”) and costs $6.50 or something. After some issues with it not installing correctly, I finally got it working by re-installing it completely.
When you open the application, you get a black screen with some blue squares on it. At the bottom, there are some icons as well. Press the “More” icon to get to the settings panel. In there, you need to configure your computer’s IP address and a port. The port number seems to be 3333 for the Multi-Touch Vista drivers. Hopefully, if all goes well, your iPod should now be connected, and if you tap the “Multi Touch” icon you should be sending multi-touch commands to your machine. At least that is the idea… I won’t say that it is always that stable, but I guess that is to be expected from something like this.
The MIRIA project also handles the gyro information if you would like to use that. It also seems possible to interface with a Wiimote using MIRIA. However, if you want to use those features of the MIRIA project with Silverlight, you will have to have the MIRIA MIG (Multi Input Gateway or something) installed instead of Multi-Touch Vista. And your code will be dependent on this and will not actually get multi-touch from the OS, but from a socket connection. Anyhow…it could be very interesting to play around with…
After all of this is connected and hopefully working, I guess it is time to start coding against it. The API for working with touch events in Silverlight is very crude. There is no cool gesture support or anything. We are talking a single static class with one event. The class in question is simply called Touch. It has a single event called FrameReported that gives you access to an instance of the TouchFrameEventArgs class. This event args gives you access to the current points on the screen relative to some UIElement, the primary touch point, a timestamp and the possibility to suspend the promotion of touch input into mouse events.
That’s it…that’s the revolutionary multi-touch support in Silverlight. So, what information can we get from the actual touch points? Well, we can of course get each point’s Position, its Action, its Size and its TouchDevice. The Action is an enum that contains three values, Down, Move and Up, and the TouchDevice mainly gives you access to an id that makes it possible to identify the specific touch point when the next event is raised. So with this basic functionality, we have to get very creative on our own. I’ve built a tiny little test application to verify that everything works, and to show the basics of touch development in Silverlight.
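Just to make the API concrete before we look at the actual project, a bare-bones handler could look something like this. This is a sketch of my own (the lambda and the Debug output are not part of the demo code below), but it shows the members I just mentioned:

    // Hook up the static event, for example in a UserControl's constructor.
    // Requires using System.Diagnostics and System.Windows.Input.
    Touch.FrameReported += (sender, e) =>
    {
        // All touch points in this frame, with positions relative to "this"
        TouchPointCollection points = e.GetTouchPoints(this);

        // The primary point is the one that gets promoted to mouse events.
        // Promotion can only be suspended while its Action is Down.
        TouchPoint primary = e.GetPrimaryTouchPoint(this);
        if (primary != null && primary.Action == TouchAction.Down)
            e.SuspendMousePromotionUntilTouchUp();

        foreach (TouchPoint point in points)
        {
            // Each point exposes Position, Action (Down/Move/Up), Size and a TouchDevice with an Id
            Debug.WriteLine(string.Format("{0}: {1} at {2}",
                point.TouchDevice.Id, point.Action, point.Position));
        }
    };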
I started out with a regular Silverlight application in VS2008, with an accompanying web application project to host it. After that I decided how I wanted to implement it, and went with my regular MVVM setup, with the viewmodel handling all of the multi-touch functionality and exposing the touch points as an ObservableCollection. The viewmodel, called TouchPointHandler, is really not that complex. It is a standard C# class with a constructor that takes a single parameter, a FrameworkElement. The FrameworkElement will be used as the reference point for the touch point locations. Other than that, it exposes a single property called Points.
public class TouchPointHandler
{
    FrameworkElement _relativeTo;
    ObservableCollection<InputPoint> _points = new ObservableCollection<InputPoint>();

    public TouchPointHandler(FrameworkElement relativeTo)
    {
        _relativeTo = relativeTo;
    }

    public ObservableCollection<InputPoint> Points
    {
        get { return _points; }
    }
}
So far there is nothing new at all. It introduces a new class called InputPoint, but other than that there is nothing going on… So let’s get started with the touchy stuff. In the constructor, I hook up a handler to the Touch class’s FrameReported event. The handler gets the TouchPoints for the event and loops through them, checking each TouchPoint’s Action. If the Action is Down, I add a new InputPoint instance to the Points collection. If it is Move, I update the corresponding InputPoint. And finally, if it is Up, I remove the point from the Points collection.
public TouchPointHandler(FrameworkElement relativeTo)
{
    _relativeTo = relativeTo;
    Touch.FrameReported += new TouchFrameEventHandler(Touch_FrameReported);
}

void Touch_FrameReported(object sender, TouchFrameEventArgs e)
{
    TouchPointCollection points = e.GetTouchPoints(_relativeTo);
    foreach (var point in points)
    {
        switch (point.Action)
        {
            case TouchAction.Down:
                Points.Add(new InputPoint(point));
                break;
            case TouchAction.Move:
                Points.Single(tp => tp.DeviceID == point.TouchDevice.Id).Update(point.Position);
                break;
            case TouchAction.Up:
                Points.Remove(Points.Single(tp => tp.DeviceID == point.TouchDevice.Id));
                break;
        }
    }
}
So…all the magic is in the InputPoint class…well…no…not really. That class is not very complicated either. It is a standard C# class implementing the INotifyPropertyChanged interface. It exposes two properties, the current position as a Point called Position and the point’s id as an integer called DeviceID. The properties both use the auto-property shorthand syntax, and are set by the constructor as well as by the class’s only public method, Update(). Very simple…it looks like this
public class InputPoint : INotifyPropertyChanged
{
    public InputPoint(TouchPoint tp)
    {
        Position = tp.Position;
        DeviceID = tp.TouchDevice.Id;
    }

    public void Update(Point p)
    {
        Position = p;
        OnPropertyChanged("Position");
    }

    public int DeviceID { get; set; }
    public Point Position { get; set; }

    protected virtual void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
Next I started looking at the view, hooking up a new TouchPointHandler and setting it as the DataContext.
public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        TouchPointHandler tph = new TouchPointHandler(this.MyItemsControl);
        this.DataContext = tph;
    }
}
The Xaml was going to be really simple. An ItemsControl bound to the Points collection. The control is placed in the top left corner of the application. The ItemTemplate is a simple thing. It contains a Grid, 50x50 pixels, containing a red Ellipse. The Ellipse is translated 25 pixels to the left and up, placing its center at the top left corner of the Grid, that is, at the Grid’s “placement point”.
<Grid x:Name="LayoutRoot">
    <ItemsControl ItemsSource="{Binding Points}" x:Name="MyItemsControl"
                  VerticalAlignment="Top" HorizontalAlignment="Left">
        <ItemsControl.ItemTemplate>
            <DataTemplate>
                <Grid Width="50" Height="50" VerticalAlignment="Top" HorizontalAlignment="Left">
                    <Ellipse Fill="Red">
                        <Ellipse.RenderTransform>
                            <TranslateTransform X="-25" Y="-25" />
                        </Ellipse.RenderTransform>
                    </Ellipse>
                </Grid>
            </DataTemplate>
        </ItemsControl.ItemTemplate>
    </ItemsControl>
</Grid>
The next step would of course be to get the actual point positions into the equation. A simple, databound TranslateTransform in the item’s Grid should do the trick. And it probably would…if Transforms inherited from FrameworkElement and not just DependencyObject. Databinding can only target objects inheriting from FrameworkElement…so that didn’t work… What’s next? Well…there are two simple ways of doing it as I see it. A Behavior, which cannot be databound either and thus would have to contain the touch logic as well…or an attached property. I like attached properties and have used them a lot before, so I went with that. I would like to point out, however, that bundling rotate and scale functionality into a Behavior would be a nice way of doing it as well…
So, I created a static class called TranslateTransformExtension. It contains two attached properties called X and Y. There is nothing odd in this class at all. It just adds attached properties like any other attached properties. That means it needs two DependencyProperty objects, getters and setters for them, and a callback for when they change.
public static class TranslateTransformExtension
{
    public static DependencyProperty XProperty =
        DependencyProperty.RegisterAttached("X", typeof(double), typeof(TranslateTransformExtension),
        new PropertyMetadata(0.0, new PropertyChangedCallback(OnXChanged)));

    public static DependencyProperty YProperty =
        DependencyProperty.RegisterAttached("Y", typeof(double), typeof(TranslateTransformExtension),
        new PropertyMetadata(0.0, new PropertyChangedCallback(OnYChanged)));

    public static double GetX(DependencyObject obj)
    {
        return (double)obj.GetValue(XProperty);
    }

    public static double GetY(DependencyObject obj)
    {
        return (double)obj.GetValue(YProperty);
    }

    public static void SetX(DependencyObject obj, double x)
    {
        obj.SetValue(XProperty, x);
    }

    public static void SetY(DependencyObject obj, double y)
    {
        obj.SetValue(YProperty, y);
    }

    private static void OnXChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e)
    {
        UIElement element = sender as UIElement;
        if (element == null)
            return;
        TranslateTransform transform = GetTranslateTransform(element);
        transform.X = (double)e.NewValue;
    }

    private static void OnYChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e)
    {
        UIElement element = sender as UIElement;
        if (element == null)
            return;
        TranslateTransform transform = GetTranslateTransform(element);
        transform.Y = (double)e.NewValue;
    }
}
As you might have noticed, I left out one method in the previous snippet: the GetTranslateTransform method. That method, however, became a little complicated. Not complicated as in hard to understand or write, just tedious, because it has to handle a few different scenarios. Any object that gets the properties attached might or might not already have transforms applied. If there is a TranslateTransform, that’s great, I can use that. If there is no transform applied, I can just add a new TranslateTransform. If there is a TransformGroup, I can either use an already existing TranslateTransform if one exists, or add a new one. And finally, if there is another type of transform, it needs to be moved into a TransformGroup and combined with a TranslateTransform. Whichever path it takes, the method returns a TranslateTransform to use when setting the element’s location based on the input point.
private static TranslateTransform GetTranslateTransform(UIElement element)
{
    TranslateTransform transform = null;
    if (element.RenderTransform == null)
    {
        transform = new TranslateTransform();
        element.RenderTransform = transform;
    }
    else if (element.RenderTransform is TransformGroup)
    {
        TransformGroup tg = (TransformGroup)element.RenderTransform;
        foreach (Transform t in tg.Children)
        {
            if (t is TranslateTransform)
            {
                transform = (TranslateTransform)t;
                break;
            }
        }
        if (transform == null)
        {
            transform = new TranslateTransform();
            tg.Children.Add(transform);
        }
    }
    else if (element.RenderTransform is TranslateTransform)
    {
        transform = (TranslateTransform)element.RenderTransform;
    }
    else
    {
        TransformGroup tg = new TransformGroup();
        Transform t = element.RenderTransform;
        tg.Children.Add(t);
        transform = new TranslateTransform();
        tg.Children.Add(transform);
        element.RenderTransform = tg;
    }
    return transform;
}
In my case, there won’t be a transform, so it will just add a new one, but I wanted to see how to handle the different scenarios. From a performance perspective I would actually recommend something else. This method will be called a lot. And I mean A LOT. So it needs to be efficient, and this might not be the most efficient way. In a real application, I would recommend adding two more “private” attached properties that the TranslateTransformExtension class can use to keep track of each element’s corresponding TranslateTransform object, doing this lookup logic only the first time and then storing the result in one of the “extra” attached properties. But why two? Well, if there is already a TranslateTransform applied, it would be sweet to store its initial transform values and apply those on top of the touch values, making the move correct even if a TranslateTransform is already applied. The existing X and Y values could be stored in a Point in the extra attached property…
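A rough sketch of what that caching could look like is shown below. The CachedTransform property and the helper method are my own inventions for illustration, not something from the demo project, and I’ve left the initial-offset part out; OnXChanged and OnYChanged would then call GetCachedTranslateTransform instead of GetTranslateTransform.

    // Hypothetical private attached property used as a per-element cache (my naming, not in the demo)
    private static readonly DependencyProperty CachedTransformProperty =
        DependencyProperty.RegisterAttached("CachedTransform", typeof(TranslateTransform),
        typeof(TranslateTransformExtension), new PropertyMetadata(null));

    private static TranslateTransform GetCachedTranslateTransform(UIElement element)
    {
        // Only walk and modify the RenderTransform the first time; reuse the stored result after that
        TranslateTransform transform = element.GetValue(CachedTransformProperty) as TranslateTransform;
        if (transform == null)
        {
            transform = GetTranslateTransform(element);
            element.SetValue(CachedTransformProperty, transform);
        }
        return transform;
    }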
But for simplicity I kept it like this. And also because the project took me all of 10 minutes to create, and I didn’t feel like refactoring something this simple…sorry!
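One thing I haven’t really shown is how the item template picks up the attached properties. Roughly, the Grid in the DataTemplate would bind them to the InputPoint’s Position, something like the sketch below. This assumes the extension class’s namespace is mapped to a local xmlns prefix, and that the binding engine resolves the Position.X and Position.Y paths on the Point, so treat it as a sketch rather than gospel:

    <Grid Width="50" Height="50" VerticalAlignment="Top" HorizontalAlignment="Left"
          local:TranslateTransformExtension.X="{Binding Position.X}"
          local:TranslateTransformExtension.Y="{Binding Position.Y}">
        <Ellipse Fill="Red">
            <Ellipse.RenderTransform>
                <TranslateTransform X="-25" Y="-25" />
            </Ellipse.RenderTransform>
        </Ellipse>
    </Grid>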
Anyhow…that is it for this time! A fully working Silverlight 3 multi-touch application. Maybe a little late, considering that Silverlight 4 beta was unveiled today at Microsoft PDC. More about PDC later… :)
If you want to have a look at the code and don’t feel like copying my snippets, it is available here: MultiTouchDemo.zip (73.40 kb)