
Touch Events

Touch events in iPhone OS are based on a Multi-Touch model. Instead of using a mouse and a keyboard, users touch the screen of the device to manipulate objects, enter data, and otherwise convey their intentions.
iPhone OS recognizes one or more fingers touching the screen as part of a Multi-Touch sequence. This sequence begins when the first finger touches down on the screen and ends when the last finger is lifted from the screen. iPhone OS tracks fingers touching the screen throughout a multi-touch sequence and records the characteristics of each of them, including the location of the finger on the screen and the time the touch occurred. Applications often recognize certain combinations of touches as gestures and respond to them in ways that are intuitive to users, such as zooming in on content in response to a pinching gesture and scrolling through content in response to a flicking gesture.

Note: A finger on the screen affords a much different level of precision than a mouse pointer. When a user touches the screen, the area of contact is actually elliptical and tends to be offset below the point where the user thinks he or she touched. This “contact patch” also varies in size and shape based on which finger is touching the screen, the size of the finger, the pressure of the finger on the screen, the orientation of the finger, and other factors. The underlying Multi-Touch system analyzes all of this information for you and computes a single touch point.
Many classes in UIKit handle multi-touch events in ways that are distinctive to objects of the class. This is especially true of subclasses of UIControl, such as UIButton and UISlider. Objects of these subclasses, known as control objects, are receptive to certain types of gestures, such as a tap or a drag in a certain direction; when properly configured, they send an action message to a target object when that gesture occurs. Other UIKit classes handle gestures in other contexts; for example, UIScrollView provides scrolling behavior for table views, text views, and other views with large content areas.
Some applications may not need to handle events directly; instead, they can rely on the classes of UIKit for that behavior. However, if you create a custom subclass of UIView (a common pattern in iPhone OS development) and you want that view to respond to certain touch events, you need to implement the code required to handle those events. Moreover, if you want a UIKit object to respond to events differently, you have to create a subclass of that framework class and override the appropriate event-handling methods.
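As a minimal sketch of this pattern, a custom view subclass that receives touch events might look like the following (the TouchableView class name and the logging are illustrative assumptions, not part of UIKit):

@interface TouchableView : UIView
@end

@implementation TouchableView
// Override the UIResponder touch methods to receive Multi-Touch events.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"%lu touch(es) began", (unsigned long)[touches count]);
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    NSLog(@"a touch ended");
}
@end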

=> Events and Touches

In iPhone OS, a touch is the presence or movement of a finger on the screen that is part of a unique multi-touch sequence. For example, a pinch-close gesture has two touches: two fingers on the screen moving toward each other from opposite directions. There are simple single-finger gestures, such as a tap, a double-tap, a drag, or a flick (where the user quickly swipes a finger across the screen). An application might recognize even more complicated gestures; for example, an application might have a custom control in the shape of a dial that users “turn” with multiple fingers to fine-tune some variable.
A UIEvent object of type UIEventTypeTouches represents a touch event. The system continually sends these touch-event objects (or simply, touch events) to an application as fingers touch the screen and move across its surface. The event provides a snapshot of all touches during a multi-touch sequence, most importantly the touches that are new or have changed for a particular view. As depicted in Figure below, a multi-touch sequence begins when a finger first touches the screen. Other fingers may subsequently touch the screen, and all fingers may move across the screen. The sequence ends when the last of these fingers is lifted from the screen. An application receives event objects during each phase of any touch.
Figure: A multi-touch sequence and touch phases


Touches, which are represented by UITouch objects, have both temporal and spatial aspects. The temporal aspect, called a phase, indicates when a touch has just begun, whether it is moving or stationary, and when it ends (that is, when the finger is lifted from the screen).

The spatial aspect of touches concerns their association with the object in which they occur as well as their location in it. When a finger touches the screen, the touch is associated with the underlying window and view and maintains that association throughout the life of the event. If multiple touches arrive at once, they are treated together only if they are associated with the same view. Likewise, if two touches arrive in quick succession, they are treated as a multiple tap only if they are associated with the same view. A touch object stores the current location and previous location (if any) of the touch in its view or window.
An event object contains all touch objects for the current multi-touch sequence and can provide touch objects specific to a view or window. A touch object is persistent for a given finger during a sequence, and UIKit mutates it as it tracks the finger throughout the multi-touch sequence. The touch attributes that change are the phase of the touch, its location in a view, its previous location, and its timestamp. Event-handling code may evaluate these attributes to determine how to respond to a touch event.
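For example, a touch-moved handler might inspect these mutable attributes as follows (a minimal sketch; the logging is purely illustrative):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint current = [touch locationInView:self];
    CGPoint previous = [touch previousLocationInView:self];
    // The phase, locations, and timestamp all change as UIKit mutates the touch.
    NSLog(@"phase %d: (%.1f, %.1f) -> (%.1f, %.1f) at %.3f",
          (int)[touch phase], previous.x, previous.y,
          current.x, current.y, [touch timestamp]);
}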
Figure: Relationship of a UIEvent object and its UITouch objects


Because the system can cancel a multi-touch sequence at any time, an event-handling application must be prepared to respond appropriately. Cancellations can occur as a result of overriding system events, such as an incoming phone call.

=> Handling Multi-Touch Events

To handle multi-touch events, you must first create a subclass of a responder class. This subclass could be any one of the following:
■ A custom view (subclass of UIView)
■ A subclass of UIViewController or one of its UIKit subclasses
■ A subclass of a UIKit view or control class, such as UIImageView or UISlider
■ A subclass of UIApplication or UIWindow (although this would be rare)
A view controller typically receives, via the responder chain, touch events initially sent to its view.
For instances of your subclass to receive multi-touch events, your subclass must implement one or more of the UIResponder methods for touch-event handling, described below. In addition, the view must be visible (neither transparent nor hidden) and must have its userInteractionEnabled property set to YES, which is the default.
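As an illustration, a view created programmatically might be configured along these lines (a sketch, assuming the hypothetical TouchableView class sketched earlier and an existing window variable):

TouchableView *touchView = [[TouchableView alloc]
    initWithFrame:CGRectMake(0.0, 0.0, 320.0, 240.0)];
touchView.userInteractionEnabled = YES; // the default; shown here for emphasis
touchView.multipleTouchEnabled = YES;   // NO by default; set to receive multiple touches
touchView.hidden = NO;
[window addSubview:touchView];
[touchView release];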
The following sections describe the touch-event handling methods, describe approaches for handling common gestures, show an example of a responder object that handles a complex sequence of multi-touch events, discuss event forwarding, and suggest some techniques for event handling.

=> The Event-Handling Methods

During a multi-touch sequence, the application dispatches a series of event messages to the target responder.
To receive and handle these messages, the class of a responder object must implement at least one of the following methods declared by UIResponder, and, in some cases, all of these methods:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;

The application sends these messages when there are new or changed touches for a given touch phase:
■ It sends the touchesBegan:withEvent: message when one or more fingers touch down on the screen.
■ It sends the touchesMoved:withEvent: message when one or more fingers move.
■ It sends the touchesEnded:withEvent: message when one or more fingers lift up from the screen.
■ It sends the touchesCancelled:withEvent: message when the touch sequence is cancelled by a system event, such as an incoming phone call.
Each of these methods is associated with a touch phase; for example, touchesBegan:withEvent: is associated with UITouchPhaseBegan. You can get the phase of any UITouch object by evaluating its phase property.
Each message that invokes an event-handling method passes in two parameters. The first is a set of UITouch objects that represent new or changed touches for the given phase. The second parameter is a UIEvent object representing this particular event. From the event object you can get all touch objects for the event or a subset of those touch objects filtered for specific views or windows. Some of these touch objects represent touches that have not changed since the previous event message or that have changed but are in different phases.
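To make the distinction concrete, a handler could compare the changed touches it was handed with the full set of touches the event maintains for the view (a minimal sketch with illustrative logging):

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // The touches set holds only the touches that moved in this phase;
    // the event knows about every touch associated with this view.
    NSUInteger movedCount = [touches count];
    NSUInteger totalCount = [[event touchesForView:self] count];
    NSLog(@"%lu of %lu touches moved", (unsigned long)movedCount,
          (unsigned long)totalCount);
}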

=> Basics of Touch-Event Handling

You frequently handle an event for a given phase by getting one or more of the UITouch objects in the passed-in set, evaluating their properties or getting their locations, and proceeding accordingly. The objects in the set represent those touches that are new or have changed for the phase represented by the implemented event-handling method. If any of the touch objects will do, you can send the NSSet object an anyObject message; this is the case when the view receives only the first touch in a multi-touch sequence (that is, the multipleTouchEnabled property is set to NO).
An important UITouch method is locationInView:, which, if passed a parameter of self, yields the location of the touch in the coordinate system of the receiving view. A parallel method, previousLocationInView:, tells you the previous location of the touch. Properties of the UITouch instance tell you how many taps have been made (tapCount), when the touch was created or last mutated (timestamp), and what phase it is in (phase).

If for some reason you are interested in touches in the current multi-touch sequence that have not changed since the last phase or that are in a phase other than the ones in the passed-in set, you can request them from the passed-in UIEvent object. The diagram in the figure below depicts a UIEvent object that contains four touch objects. To get all of these touch objects, you would invoke the allTouches method on the event object.
Figure: All touches for a given touch event
If, on the other hand, you are interested in only those touches associated with a specific window (Window A in the figure below), you would send the UIEvent object a touchesForWindow: message.
Figure: All touches belonging to a specific window
If you want to get the touches associated with a specific view, you would call touchesForView: on the event object, passing in the view object (View A in the figure below).
Figure: All touches belonging to a specific view
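The three accessors might be used together as follows (a sketch; which one you need depends on the scope of the touches you care about):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    NSSet *allTouches = [event allTouches];                      // every touch in the sequence
    NSSet *windowTouches = [event touchesForWindow:self.window]; // touches in this view's window
    NSSet *viewTouches = [event touchesForView:self];            // touches in this view only
    NSLog(@"%lu total, %lu in window, %lu in view",
          (unsigned long)[allTouches count],
          (unsigned long)[windowTouches count],
          (unsigned long)[viewTouches count]);
}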
If a responder creates persistent objects while handling events during a multi-touch sequence, it should implement touchesCancelled:withEvent: to dispose of those objects when the system cancels the sequence. Cancellation often occurs when an external event (for example, an incoming phone call) disrupts the current application’s event processing. Note that a responder object should also dispose of those same objects when it receives the last touchesEnded:withEvent: message for a multi-touch sequence.
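A minimal cleanup pattern might look like the following, where trackingState is a hypothetical object created earlier in touchesBegan:withEvent::

- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    // Dispose of any objects created for this sequence; trackingState is hypothetical.
    [trackingState release];
    trackingState = nil;
}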

=> Handling Tap Gestures

A very common gesture in iPhone applications is the tap: the user taps an object on the screen with his or her finger. A responder object can handle a single tap in one way, a double-tap in another, and possibly a triple-tap in yet another way. To determine the number of times the user tapped a responder object, you get the value of the tapCount property of a UITouch object.
The best places to find this value are the methods touchesBegan:withEvent: and touchesEnded:withEvent:. In many cases, the latter method is preferred because it corresponds to the touch phase in which the user lifts a finger from a tap. By looking for the tap count in the touch-up phase (UITouchPhaseEnded), you ensure that the finger is really tapping and not, for instance, touching down and then dragging.
The following program shows how to determine whether a double-tap occurred in one of your views.
Detecting a double-tap gesture
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        if (touch.tapCount >= 2) {
            [self.superview bringSubviewToFront:self];
        }
    }
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
}
A complication arises when a responder object wants to handle a single-tap and a double-tap gesture in different ways. For example, a single tap might select the object and a double tap might display a view for editing the item that was double-tapped. How is the responder object to know that a single tap is not the first part of a double tap? The following listing illustrates an implementation of the event-handling methods that increases the size of the receiving view upon a double-tap gesture and decreases it upon a single-tap gesture.
The following is a commentary on this code:
1. In touchesEnded:withEvent:, when the tap count is one, the responder object sends itself a performSelector:withObject:afterDelay: message. The selector identifies another method implemented by the responder to handle the single-tap gesture; the second parameter is an NSValue or NSDictionary object that holds some state of the UITouch object; the delay is some reasonable interval between a single- and a double-tap gesture.

Note: Because a touch object is mutated as it proceeds through a multi-touch sequence, you cannot retain a touch and assume that its state remains the same. (And you cannot copy a touch object because UITouch does not adopt the NSCopying protocol.) Thus if you want to preserve the state of a touch object, you should store those bits of state in an NSValue object, a dictionary, or a similar object. (The code in the listing stores the location of the touch in a dictionary but does not use it; this code is included for purposes of illustration.)
2. In touchesBegan:withEvent:, if the tap count is two, the responder object cancels the pending delayed-perform invocation by calling the cancelPreviousPerformRequestsWithTarget: method of NSObject, passing itself as the argument. If the tap count is not two, the method identified by the selector in the previous step for single-tap gestures is invoked after the delay.
3. In touchesEnded:withEvent:, if the tap count is two, the responder performs the actions necessary for handling double-tap gestures.
Handling a single-tap gesture and a double-tap gesture
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    if (aTouch.tapCount == 2) {
        [NSObject cancelPreviousPerformRequestsWithTarget:self];
    }
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *theTouch = [touches anyObject];
    if (theTouch.tapCount == 1) {
        NSDictionary *touchLoc = [NSDictionary dictionaryWithObject:
            [NSValue valueWithCGPoint:[theTouch locationInView:self]]
            forKey:@"location"];
        [self performSelector:@selector(handleSingleTap:) withObject:touchLoc
              afterDelay:0.3];
    } else if (theTouch.tapCount == 2) {
        // Double-tap: increase image size by 10%
        CGRect myFrame = self.frame;
        myFrame.size.width += self.frame.size.width * 0.1;
        myFrame.size.height += self.frame.size.height * 0.1;
        // Shift the origin by half the added size so the view grows around its center.
        myFrame.origin.x -= (self.frame.size.width * 0.1) / 2.0;
        myFrame.origin.y -= (self.frame.size.height * 0.1) / 2.0;
        [UIView beginAnimations:nil context:NULL];
        [self setFrame:myFrame];
        [UIView commitAnimations];
    }
}
- (void)handleSingleTap:(NSDictionary *)touches {
    // Single-tap: decrease image size by 10%
    CGRect myFrame = self.frame;
    myFrame.size.width -= self.frame.size.width * 0.1;
    myFrame.size.height -= self.frame.size.height * 0.1;
    // Shift the origin by half the removed size so the view shrinks around its center.
    myFrame.origin.x += (self.frame.size.width * 0.1) / 2.0;
    myFrame.origin.y += (self.frame.size.height * 0.1) / 2.0;
    [UIView beginAnimations:nil context:NULL];
    [self setFrame:myFrame];
    [UIView commitAnimations];
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    /* no state to clean up, so null implementation */
}


=> Handling Swipe and Drag Gestures

Horizontal and vertical swipes are a simple type of gesture that you can track easily from your own code and use to perform actions. To detect a swipe gesture, you have to track the movement of the user’s finger along the desired axis of motion, but it is up to you to determine what constitutes a swipe. In other words, you need to determine whether the user’s finger moved far enough, if it moved in a straight enough line, and if it went fast enough. You do that by storing the initial touch location and comparing it to the location reported by subsequent touch-moved events.
The following program shows some basic tracking methods you could use to detect horizontal swipes in a view. In this example, the view stores the initial location of the touch in a startTouchPosition instance variable. As the user’s finger moves, the code compares the current touch location to the starting location to determine whether it is a swipe. If the touch moves too far vertically, it is not considered to be a swipe and is processed differently. If it continues along its horizontal trajectory, however, the code continues processing the event as if it were a swipe. The processing routines could then trigger an action once the swipe had progressed far enough horizontally to be considered a complete gesture. To detect swipe gestures in the vertical direction, you would use similar code but would swap the x and y components.
Tracking a swipe gesture in a view
#define HORIZ_SWIPE_DRAG_MIN 12
#define VERT_SWIPE_DRAG_MAX 4
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // startTouchPosition is a CGPoint instance variable
    startTouchPosition = [touch locationInView:self];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint currentTouchPosition = [touch locationInView:self];
    // To be a swipe, direction of touch must be horizontal and long enough.
    if (fabsf(startTouchPosition.x - currentTouchPosition.x) >= HORIZ_SWIPE_DRAG_MIN &&
        fabsf(startTouchPosition.y - currentTouchPosition.y) <= VERT_SWIPE_DRAG_MAX)
    {
        // It appears to be a swipe.
        if (startTouchPosition.x < currentTouchPosition.x)
            [self myProcessRightSwipe:touches withEvent:event];
        else
            [self myProcessLeftSwipe:touches withEvent:event];
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    startTouchPosition = CGPointZero;
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
    startTouchPosition = CGPointZero;
}

The following program shows an even simpler implementation of tracking a single touch, but this time for the purposes of dragging the receiving view around the screen. In this instance, the responder class fully implements only the touchesMoved:withEvent: method, and in this method it computes a delta value between the touch's current location in the view and its previous location in the view. It then uses this delta value to reset the origin of the view’s frame.
Dragging a view using a single touch
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *aTouch = [touches anyObject];
    CGPoint loc = [aTouch locationInView:self];
    CGPoint prevloc = [aTouch previousLocationInView:self];
    CGRect myFrame = self.frame;
    float deltaX = loc.x - prevloc.x;
    float deltaY = loc.y - prevloc.y;
    myFrame.origin.x += deltaX;
    myFrame.origin.y += deltaY;
    [self setFrame:myFrame];
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
}
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event {
}

=> Handling a Complex Multi-Touch Sequence

Taps, drags, and swipes are simple gestures, typically involving only a single touch. Handling a touch event consisting of two or more touches is a more complicated affair. You may have to track all touches through all phases, recording the touch attributes that have changed and altering internal state appropriately. There are a couple of things you should do when tracking and handling multiple touches:
■ Set the multipleTouchEnabled property of the view to YES.
■ Use a Core Foundation dictionary object (CFDictionaryRef) to track the mutations of touches through their phases during the event.
When handling an event with multiple touches, you often store initial bits of each touch’s state for later comparison with the mutated UITouch instance. As an example, say you want to compare the final location of each touch with its original location. In the touchesBegan:withEvent: method, you can obtain the original location of each touch from the locationInView: method and store those in a CFDictionaryRef object using the addresses of the UITouch objects as keys. Then, in the touchesEnded:withEvent: method you can use the address of each passed-in UITouch object to obtain the object’s original location and compare that with its current location. (You should use a CFDictionaryRef type rather than an NSDictionary object; the latter copies its keys, but the UITouch class does not adopt the NSCopying protocol, which is required for object copying.)
The following listing illustrates how you might store beginning locations of UITouch objects in a Core Foundation dictionary. (This and the following example are from the MultiTouchDemo example project.)
Storing the beginning locations of multiple touches
- (void)cacheBeginPointForTouches:(NSSet *)touches
{
    if ([touches count] > 0) {
        for (UITouch *touch in touches) {
            CGPoint *point = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
            if (point == NULL) {
                point = (CGPoint *)malloc(sizeof(CGPoint));
                CFDictionarySetValue(touchBeginPoints, touch, point);
            }
            *point = [touch locationInView:view.superview];
        }
    }
}
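The creation of the touchBeginPoints dictionary itself is not shown in the sample; one plausible way to create and clean it up (an assumption consistent with using raw UITouch addresses as keys and malloc'd pointers as values) is to pass NULL callbacks so that the dictionary neither copies nor retains its keys and values:

// In the initializer: keys are UITouch addresses, values are malloc'd CGPoint pointers.
touchBeginPoints = CFDictionaryCreateMutable(NULL, 0, NULL, NULL);

// When touches end or are cancelled, free the stored point and remove the entry.
- (void)removeBeginPointForTouches:(NSSet *)touches {
    for (UITouch *touch in touches) {
        CGPoint *point = (CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch);
        if (point != NULL) {
            free(point);
            CFDictionaryRemoveValue(touchBeginPoints, touch);
        }
    }
}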

The next listing illustrates how to retrieve those initial locations stored in the dictionary. It also gets the current locations of the same touches. It uses these values in computing an affine transformation (not shown).
Retrieving the initial locations of touch objects
- (CGAffineTransform)incrementalTransformWithTouches:(NSSet *)touches {
    NSArray *sortedTouches = [[touches allObjects]
        sortedArrayUsingSelector:@selector(compareAddress:)];
    // other code here ...
    UITouch *touch1 = [sortedTouches objectAtIndex:0];
    UITouch *touch2 = [sortedTouches objectAtIndex:1];
    CGPoint beginPoint1 = *(CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch1);
    CGPoint currentPoint1 = [touch1 locationInView:view.superview];
    CGPoint beginPoint2 = *(CGPoint *)CFDictionaryGetValue(touchBeginPoints, touch2);
    CGPoint currentPoint2 = [touch2 locationInView:view.superview];
    // compute the affine transform...
}

Although the following code example doesn’t use a dictionary to track touch mutations, it also handles multiple touches during an event. It shows a custom UIView object responding to touches by animating the movement of a “Welcome” placard around the screen as a finger moves it and changing the language of the welcome when the user makes a double-tap gesture. (The code in this example comes from the MoveMe sample code project, which you can examine to get a better understanding of the event-handling context.)
Handling a complex multi-touch sequence
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    // Only move the placard view if the touch was in the placard view
    if ([touch view] != placardView) {
        // On double tap outside placard view, update placard's display string
        if ([touch tapCount] == 2) {
            [placardView setupNextDisplayString];
        }
        return;
    }
    // "Pulse" the placard view by scaling up then down
    // Use UIView's built-in animation
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.5];
    CGAffineTransform transform = CGAffineTransformMakeScale(1.2, 1.2);
    placardView.transform = transform;
    [UIView commitAnimations];
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.5];
    transform = CGAffineTransformMakeScale(1.1, 1.1);
    placardView.transform = transform;
    [UIView commitAnimations];
    // Move the placardView to under the touch
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.25];
    placardView.center = [self convertPoint:[touch locationInView:self]
                                   fromView:placardView];
    [UIView commitAnimations];
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    // If the touch was in the placardView, move the placardView to its location
    if ([touch view] == placardView) {
        CGPoint location = [touch locationInView:self];
        location = [self convertPoint:location fromView:placardView];
        placardView.center = location;
        return;
    }
}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [[event allTouches] anyObject];
    // If the touch was in the placardView, bounce it back to the center
    if ([touch view] == placardView) {
        // Disable user interaction so subsequent touches don't interfere with animation
        self.userInteractionEnabled = NO;
        [self animatePlacardViewToCenter];
        return;
    }
}

Note: Custom views that redraw themselves in response to events they handle generally should set only drawing state in the event-handling methods and perform all of the drawing in the drawRect: method.
To find out when the last finger in a multi-touch sequence is lifted from a view, compare the number of UITouch objects in the passed-in set with the number of touches for the view maintained by the passed-in UIEvent object. If they are the same, the multi-touch sequence has concluded. The following listing illustrates how to do this in code.
Determining when the last touch in a multi-touch sequence has ended
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    if ([touches count] == [[event touchesForView:self] count]) {
        // last finger has lifted....
    }
}

Remember that the passed-in set contains all touch objects associated with the receiving view that are new or changed for the given phase, whereas the touch objects returned from touchesForView: include all touch objects associated with the specified view.
