Silverlight 3 multi-touch development 101 – pt 3 – Rotation

Welcome back for the third part of the multi-touch 101 series. If you haven’t already read the previous parts, I suggest that you do so, as the code in this part builds on top of the code from the previous ones. Part 1 is available here and part 2 here.

This part will cover the last type of gesture that the demo application uses – rotation. For me, rotation was by far the most complicated thing to implement. Not because the code is very complicated, but because the math behind it isn’t that simple. Well…the math is actually not that complicated either, but most of the developers I know are not big fans of trigonometry and thus couldn’t really help me out when I had questions. I even called up my dad, who used to be a math teacher, but when I explained what I was trying to do, he just shook his head and wondered what I was talking about… But let’s get started.

In the previous parts, I have been working on the TouchManager class that helps out with the “gesture support”, and I will keep extending this class to support rotation. I will start with the HandleMovement method that was used in both part 1 and part 2. When I talked about zoom in part 2, I added code to get hold of the relative position of touch point 1 in relation to touch point 2. I got this relative position both before and after the last movement, and stored the positions in two variables called lastRelPos and newRelPos. These will be used to figure out the rotation as well.
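If you don’t have part 2 fresh in memory, the helpers from that part boil down to something like this. This is a rough sketch rather than the exact code from the download – the class and method names are the ones used throughout the series, but the implementation details are my reconstruction. The angle convention, 0 degrees pointing straight up and increasing clockwise, is the one the trigonometry later in this post assumes.

// Rough sketch of the part 2 helpers used throughout this post.
// Angles are in degrees, measured clockwise with 0 pointing straight up,
// to match the Sin/Cos usage further down.
public class RelativePosition
{
    public double Angle { get; set; }
    public double Distance { get; set; }
    public Point Position { get; set; }
}

public static class TrigUtility
{
    public static Point GetCenterOfPoints(Point p1, Point p2)
    {
        // The point right between the two touch points
        return new Point((p1.X + p2.X) / 2, (p1.Y + p2.Y) / 2);
    }

    public static RelativePosition GetRelativePosition(Point point, Point origin)
    {
        double dx = point.X - origin.X;
        double dy = point.Y - origin.Y;

        // Atan2(dx, -dy) gives the clockwise angle from "up", in radians
        double angle = Math.Atan2(dx, -dy) * 180.0 / Math.PI;
        if (angle < 0)
            angle += 360;

        return new RelativePosition
        {
            Angle = angle,
            Distance = Math.Sqrt(dx * dx + dy * dy)
        };
    }
}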

I start off by getting the angle of these positions on a scale from -180 to 180 instead of 0 to 360. That makes it easier to figure out the change between them; a 0–360 scale causes some problems when going past 360 degrees or below 0, so -180 to 180 makes that a little easier. I do this by subtracting 360 if the angle is greater than 180. Next, I get the rotation caused by the last movement by subtracting the previous angle from the new one.


// Normalize both angles from the 0-360 scale to -180-180
if (lastRelPos.Angle > 180.0)
    lastRelPos.Angle -= 360;

if (newRelPos.Angle > 180.0)
    newRelPos.Angle -= 360;

// The rotation caused by the last movement
double rotation = newRelPos.Angle - lastRelPos.Angle;

Now I know how much rotation the last movement has caused. After this, I make sure that the angle is within my famous -180 to 180 degree range. Once that is done, I finish off the rotation part of the TouchManager by raising a Rotate event through a corresponding OnRotate method. I won’t show the code for the event, as it is the same as any other event…

// Keep the resulting rotation within -180 to 180 as well
if (rotation > 180)
    rotation -= 360;
else if (rotation < -180)
    rotation += 360;

OnRotate(rotation);
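For completeness, though, a minimal version of that plumbing could look something like this. Note that the TouchEventArgs&lt;double&gt; constructor and the touchPoints field are assumptions on my part – the download has the real thing.

// Hypothetical sketch of the omitted event plumbing
public event EventHandler<TouchEventArgs<double>> Rotate;

protected void OnRotate(double rotation)
{
    if (Rotate != null)
        Rotate(this, new TouchEventArgs<double>(rotation, touchPoints));
}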

Well, that wasn’t so hard, was it? No…but unfortunately I’m not done. Figuring out the rotation caused by the last movement is quite trivial. The hard part comes when I need to handle that information.

The logic behind what I want to do is sort of hard to put into words. That’s why I have decided to first try to explain it, and then add a drawing.

Since the center of rotation is right between the two fingers, it can move around and change over and over again as the user lifts the fingers and repositions them. The problem this causes is that, just like with the scaling, I can’t simply move the transform’s origin. If I do that, it will work for the first rotation, but as soon as the point of rotation changes, the element will jump around. Just imagine rotating a square element 45 degrees around its center, and then moving the center of rotation to the bottom right corner. That will cause the 45 degree rotation to be applied around that point instead of around the center, resulting in a visual transition that makes the square jump around.

So instead, the rotation will always be done in relation to the global top left corner. To make it look as if the rotation happens around the point between the user’s fingers, I will add a translation. Rotating around the top left corner causes the element to move in a circular motion around that corner. This movement can be calculated using the fact that I know the radius of the circle (the distance between the rotation point and the top left corner) and the angle. If I figure out the movement caused by the rotation and counter-move the parent element by the same amount, it will look as if it has actually rotated around the correct place… Makes sense? Well…have a look at this:

[Figure: Rotation – the element rotates around the global top left corner, and the X and Y counter-translation makes it appear to rotate around the point between the fingers]

So, to make it look ok, there will have to be some math involved…and I do hate math. I know that we developers use math all the time, but it is a different thing when it is “actual” math…

So let’s see how I handle this whole thing… My TouchDemoView attaches a handler to the Rotate event in its constructor. This handler starts off by figuring out what point it is supposed to rotate around, which is, as I mentioned before, the point right between the two touch points.

void OnRotation(object sender, TouchEventArgs<double> e)
{
    // Rotate around the point right between the two touch points
    Point rotationPoint = TrigUtility.GetCenterOfPoints(e.LocalPoints[0], e.LocalPoints[1]);
    ...
}
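As a side note, this part never shows how Rotation, Scaling and Translation are set up, or how the handlers get attached. A rough sketch of what that wiring presumably looks like in the TouchDemoView constructor is shown below – the element name LayoutRoot, the touchManager field and the order of the transforms are all my assumptions, so check the download for the authoritative setup.

// Assumed setup: a TransformGroup exposing the individual transforms
Rotation = new RotateTransform();
Scaling = new ScaleTransform();
Translation = new TranslateTransform();

TransformGroup group = new TransformGroup();
group.Children.Add(Scaling);
group.Children.Add(Rotation);
group.Children.Add(Translation);
LayoutRoot.RenderTransform = group;

// Hook up the gesture events from the TouchManager
touchManager.Rotate += OnRotation;
touchManager.Zoom += OnZoom;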

Next, I need to figure out where that point is in relation to the “real” point of rotation – basically the relative position of that point in relation to the “real” one. In my case, the real point happens to be at X=0, Y=0. So to get the relative position, I just have to use the GetRelativePosition method and pass in the point I want to rotate around and “new Point(0,0)”. The relative position I get back contains the angle between that point and the top left corner. To that angle, I add the current rotation to get the “real” angle between the element I’m rotating and the global top left corner. I then get the new angle by adding the angle passed from the TouchManager, and set the rotation to that.

...
RelativePosition relPos = TrigUtility.GetRelativePosition(rotationPoint, new Point(0, 0));
// Add the current rotation to get the "real" angle to the global top left corner
relPos.Angle += Rotation.Angle;

// The new angle includes the rotation reported by the TouchManager
double newAngle = relPos.Angle + e.Payload;
Rotation.Angle += e.Payload;
...

Next, it is time to figure out what counter translation I need to make to keep the rotation point looking correct. That is, I need the X and Y from the image above. To do this, I use an overload that I have created for the GetRelativePosition method. This overload takes a distance and an angle and returns a RelativePosition based on those. But let’s leave that method for a minute while I finish off the event handler I am working on.

By calling this, so far unknown, method twice – once with the rotation point’s distance and current angle, and once with the new angle – I get the before and after points. As soon as I’ve got those, it is fairly simple to just subtract the two points’ X and Y values. However, since I want to “counter” move the element, I subtract the new point’s values from the old point’s.

I then simply add those values to the current Translation.X and Translation.Y…

Point originalPoint = TrigUtility.GetRelativePosition(relPos.Distance, relPos.Angle).Position;
Point newPoint = TrigUtility.GetRelativePosition(relPos.Distance, newAngle).Position;

// Counter-move the element by the difference between the before and after points
Point t = new Point(originalPoint.X - newPoint.X, originalPoint.Y - newPoint.Y);

Translation.X += t.X;
Translation.Y += t.Y;

There is actually one more change that I need to make before the application will work, but let’s first have a quick look at the aforementioned overload of the GetRelativePosition method.

It starts off by creating a new RelativePosition object that will be returned at the end of the method. It then sets that object’s Distance and Angle properties, as these are already known from the parameters passed to the method. But before setting the Angle property, it makes sure that the angle is within the correct bounds (0–360 degrees).

public static RelativePosition GetRelativePosition(double distance, double angle)
{
    RelativePosition pos = new RelativePosition();
    pos.Distance = distance;

    // Keep the angle within the 0-360 degree bounds
    // (negative angles are handled too, since the rotation can be negative)
    while (angle > 360)
        angle -= 360;
    while (angle < 0)
        angle += 360;

    pos.Angle = angle;
    ...
}


Next, it converts the angle to radians, since that is what’s used when doing trig calculations in C#. And if the angle is larger than 90 degrees, I reduce it to the 0–90 degree span; the quadrant it was originally in is compensated for later on.

...
double radAngle = ToRadian(angle);

// Reduce the angle to the 0-90 degree span; the quadrant is compensated for below
if (angle > 270)
    radAngle = ToRadian(angle - 270);
else if (angle > 180)
    radAngle = ToRadian(angle - 180);
else if (angle > 90)
    radAngle = ToRadian(angle - 90);
...
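The ToRadian helper isn’t shown in the post, but it is presumably just the standard degree-to-radian conversion:

// Standard degree-to-radian conversion
private static double ToRadian(double degrees)
{
    return degrees * Math.PI / 180.0;
}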


Next, I get the relative X and Y. X is fairly simple – it is just a matter of using the Math object’s Sin method and multiplying the result by the distance. The Y, however, is a bit trickier. Since the screen’s Y axis grows downwards while the angle is measured from “up”, the Y will actually be the negative value of Cos(angle) multiplied by the distance.

...
double x = Math.Sin(radAngle) * distance;
// The screen's Y axis grows downwards, hence the negation
double y = -Math.Cos(radAngle) * distance;
...


Finally, there is a bunch of if/else clauses to compensate for the fact that the angle might not have been within the 0–90 degree span. Then the return value gets its Position set before being returned to the caller.

// Compensate for the quadrant the original angle was in
Point p = new Point();
if (angle > 270)
{
    p.X = y;
    p.Y = -x;
}
else if (angle > 180)
{
    p.X = -x;
    p.Y = -y;
}
else if (angle > 90)
{
    p.X = -y;
    p.Y = x;
}
else
{
    p.X = x;
    p.Y = y;
}

pos.Position = p;
return pos;
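For convenience, here is the whole overload assembled from the fragments above:

public static RelativePosition GetRelativePosition(double distance, double angle)
{
    RelativePosition pos = new RelativePosition();
    pos.Distance = distance;

    // Keep the angle within the 0-360 degree bounds
    while (angle > 360)
        angle -= 360;
    while (angle < 0)
        angle += 360;
    pos.Angle = angle;

    // Reduce the angle to the 0-90 degree span
    double radAngle = ToRadian(angle);
    if (angle > 270)
        radAngle = ToRadian(angle - 270);
    else if (angle > 180)
        radAngle = ToRadian(angle - 180);
    else if (angle > 90)
        radAngle = ToRadian(angle - 90);

    double x = Math.Sin(radAngle) * distance;
    double y = -Math.Cos(radAngle) * distance;

    // Compensate for the quadrant the original angle was in
    Point p = new Point();
    if (angle > 270) { p.X = y; p.Y = -x; }
    else if (angle > 180) { p.X = -x; p.Y = -y; }
    else if (angle > 90) { p.X = -y; p.Y = x; }
    else { p.X = x; p.Y = y; }

    pos.Position = p;
    return pos;
}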

That’s it. But I did mention one more change that has to be made before it starts working properly. Well, my previous part about zoom actually contained a bit of bogus code to make things work while waiting for the rotation pieces to be implemented… It implemented OnZoom as follows:

void OnZoom(object sender, TouchEventArgs<double> e)
{
    Point zoomPoint = TrigUtility.GetCenterOfPoints(e.LocalPoints[0], e.LocalPoints[1]);

    double relativeZoom = e.Payload - 1;
    Point zoomTranslation = new Point(-zoomPoint.X * relativeZoom, -zoomPoint.Y * relativeZoom);
    RelativePosition postZoomPosition = TrigUtility.GetRelativePosition(zoomTranslation, new Point(0.0, 0.0));

    Scaling.ScaleX *= e.Payload;
    Scaling.ScaleY *= e.Payload;
    Translation.X += zoomTranslation.X;
    Translation.Y += zoomTranslation.Y;
}


However, that code only works as long as the whole thing isn’t rotated. It calculates the amount of counter movement needed for the zoom to make it look as if it happens in the right place. Unfortunately, that movement will be fudged as soon as rotation is taken into account. If I, for example, rotate the whole thing 90 degrees, the calculations in the previous code will actually cause the element to move up and down instead of right and left, and vice versa. So the rotation must obviously be taken into account…

So you can basically just scrap the code after the line that gets the postZoomPosition. Once the postZoomPosition has been acquired, the angle to be used for the compensation calculations is the postZoomPosition’s angle plus the current rotation. This angle is then used to get a new RelativePosition, whose position equals the movement needed for the compensation. But since we are zooming, we also need to subtract the Translation multiplied by the relative zoom.

So the OnZoom implementation turns into this:

void OnZoom(object sender, TouchEventArgs<double> e)
{
    Point zoomPoint = TrigUtility.GetCenterOfPoints(e.LocalPoints[0], e.LocalPoints[1]);

    double relativeZoom = e.Payload - 1;
    Point zoomTranslation = new Point(-zoomPoint.X * relativeZoom, -zoomPoint.Y * relativeZoom);
    RelativePosition postZoomPosition = TrigUtility.GetRelativePosition(zoomTranslation, new Point(0.0, 0.0));

    // Take the current rotation into account when compensating for the zoom
    double compAngle = postZoomPosition.Angle + Rotation.Angle;
    RelativePosition compZoomPosition = TrigUtility.GetRelativePosition(postZoomPosition.Distance, compAngle);

    Scaling.ScaleX *= e.Payload;
    Scaling.ScaleY *= e.Payload;
    Translation.X += compZoomPosition.Position.X - Translation.X * relativeZoom;
    Translation.Y += compZoomPosition.Position.Y - Translation.Y * relativeZoom;
}


That’s REALLY it! This code should make the application work nicely. It is actually funny how this simple application hogs attention. The “physical” human/computer interaction is just so different from the “normal” interaction. Having something digital on the screen move as your fingers “touch” it is just very cool…even if this application is very simple. But I guess we need to get used to this, as NUIs (Natural User Interfaces) are getting more and more common and will just keep coming. Hopefully the pre-built support and APIs will make it easier than this…

So, now you have no reason not to go ahead and build some cool multi-touch applications! Except possibly for the fact that you might not have the luxury of a multi-touch device. But you will in the near future! I promise!

Cheers!

Here is the code for download. I’m sorry that the code is in a class library project with some odd namespace names, but it is part of a bigger project. It will at least keep you from having to type the code on your own or copy/paste my snippets from the blog…


[UPDATED]
I am apparently a completely useless mathematician, which I actually already knew. I apparently also suck at testing my software before putting it on my blog. There was an error in the TrigUtility which caused the application to work less than perfectly, if we put it like that. And since nobody has complained, I assume that nobody has actually downloaded the code. Or everyone who has, has found the error and fixed it without telling me. Anyhow, this post is now updated and the download is fixed… DarksideCookie.TouchDemo.zip (135.45 kb)
