
How does motion tracking work?
The tracking behaviors in Motion analyze an area of pixels in a clip, known as a reference pattern, and “lock onto” that pattern as it moves across the canvas. You specify the reference pattern by dragging one or more onscreen trackers to the area of the clip you want to analyze. Motion then analyzes and records the movement of the designated reference pattern.
Ideally, the reference pattern should be a consistent, easily identifiable detail with high contrast. This makes the pattern easier to track.
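To make this concrete, the sketch below (in Swift) shows the general kind of analysis a point tracker performs: slide a small reference patch across a search region and score each offset with normalized cross-correlation, keeping the best match as the pattern’s new location. This is a simplified illustration of the technique, not Motion’s actual implementation; the names and the brute-force search are hypothetical.

```swift
// Grayscale pixel values, row-major. (Hypothetical types for illustration.)
typealias Patch = [[Double]]

// Mean pixel value of a patch.
func mean(_ p: Patch) -> Double {
    let sum = p.reduce(0) { $0 + $1.reduce(0, +) }
    return sum / Double(p.count * p[0].count)
}

// Normalized cross-correlation between the reference patch and the
// equally sized window of `frame` whose top-left corner is (row, col).
func ncc(_ ref: Patch, _ frame: Patch, row: Int, col: Int) -> Double {
    var window: Patch = []
    for r in 0..<ref.count {
        window.append(Array(frame[row + r][col..<(col + ref[0].count)]))
    }
    let (refMean, winMean) = (mean(ref), mean(window))
    var num = 0.0, refVar = 0.0, winVar = 0.0
    for r in 0..<ref.count {
        for c in 0..<ref[0].count {
            let a = ref[r][c] - refMean
            let b = window[r][c] - winMean
            num += a * b
            refVar += a * a
            winVar += b * b
        }
    }
    let denom = (refVar * winVar).squareRoot()
    return denom == 0 ? 0 : num / denom  // flat (low-contrast) regions score 0
}

// Exhaustively search the frame for the offset where the pattern best matches.
func track(ref: Patch, in frame: Patch) -> (row: Int, col: Int, score: Double) {
    var best = (row: 0, col: 0, score: -1.0)
    for r in 0...(frame.count - ref.count) {
        for c in 0...(frame[0].count - ref[0].count) {
            let s = ncc(ref, frame, row: r, col: c)
            if s > best.score { best = (r, c, s) }
        }
    }
    return best
}

// Example: a high-contrast 2×2 pattern is found one pixel right and down.
let ref: Patch = [[1, 0],
                  [0, 1]]
let frame: Patch = [[0, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 0]]
print(track(ref: ref, in: frame))  // (row: 1, col: 1, score: 1.0)
```

Note how a flat, low-contrast patch has zero variance and scores 0 everywhere in this scheme, which is one way to see why a distinctive, high-contrast detail tracks more reliably.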
Each of the six tracking behaviors in Motion is optimized to perform a different type of motion tracking:
Analyze Motion: Generates and stores tracking information from a source video clip that can be applied to other objects. See Analyze and record movement in a clip.
Match Move: Applies the movement of a source video clip (or animated object) to another object so they appear locked together. See Intro to match moving.
Stabilize: Removes unwanted motion in a video clip, such as camera jitter. See Stabilize a shaky clip.
Unstabilize: Applies the movement recorded by a Stabilize behavior to a video clip or object. For example, you can use this behavior to match the camera shake in a clip to elements added in post-production. See Unstabilize a clip.
Track Points: Matches the control points of a shape, paint stroke, or mask to a reference feature in a video clip. For example, you can draw a mask around a car in a clip and then track the control points of the mask to the moving car, cutting the car out of the background. See Track shapes, masks, and paint strokes.
Track: Matches the position or anchor point parameters of shapes, images, or filters to a reference feature in a video clip. For example, you can make the center of a Circle Blur filter follow a person’s face, obscuring it as it moves. See Track the position of a filter or object.
The Analyze Motion, Match Move, and Track behaviors have two modes of analyzing reference patterns:
Object mode: Uses machine learning or point cloud analysis (or a combination of both methods) to recognize and track subjects such as people or faces, pets, cars, or other common patterns. Or, if you want to manually specify a reference pattern, you can drag the onscreen object tracker (an adjustable onscreen control) to the area in the canvas you want to analyze.
Note: Mac computers with Apple silicon use an enhanced machine learning model to analyze the movement of faces and other objects, delivering faster and more accurate tracking than Intel-based Mac computers.
Point mode: Analyzes a pixel pattern within a search region, then tracks that pattern as it moves over time. You specify the reference pattern to be analyzed by dragging one or more onscreen point trackers (a yellow or red crosshair in a circle) to the area in the canvas you want to analyze.
The more point trackers you use, the more spatial information you record: one-point tracking records position data; two-point and four-point tracking record position, rotation, and scale data (by comparing the relative change between the points, as in the sketch after the note below); multiple-point tracking can record all the control points (vertices) on a shape.
Note: Tracking in Motion is not 3D; it doesn’t occur in Z space (depth). When you analyze two features in a clip that’s moving in 3D space, you record the changes in position, scale, and rotation over time, but not the clip’s actual 3D transformation.
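As an illustration of the arithmetic behind two-point tracking, the sketch below (in Swift; a hypothetical example, not Motion’s internal code) derives position, rotation, and scale changes from two tracked points by comparing the vector between them in a reference frame with the same vector in the current frame. Consistent with the note above, the result is purely 2D: movement toward or away from the camera shows up only as a scale change.

```swift
import Foundation

// A tracked point's position in canvas coordinates. (Hypothetical names.)
struct TrackedPoint { var x: Double; var y: Double }

// Derive 2D position, rotation, and scale changes from two tracked points:
// refA/refB are their positions in the reference frame, curA/curB in the
// current frame.
func transform(refA: TrackedPoint, refB: TrackedPoint,
               curA: TrackedPoint, curB: TrackedPoint)
    -> (position: (x: Double, y: Double), rotation: Double, scale: Double) {
    // Position: movement of the midpoint between the two trackers.
    let position = (x: (curA.x + curB.x - refA.x - refB.x) / 2,
                    y: (curA.y + curB.y - refA.y - refB.y) / 2)
    // Vector from tracker A to tracker B in each frame.
    let (rdx, rdy) = (refB.x - refA.x, refB.y - refA.y)
    let (cdx, cdy) = (curB.x - curA.x, curB.y - curA.y)
    // Rotation: change in the vector's angle, in radians.
    let rotation = atan2(cdy, cdx) - atan2(rdy, rdx)
    // Scale: ratio of the vector's lengths.
    let scale = (cdx * cdx + cdy * cdy).squareRoot()
              / (rdx * rdx + rdy * rdy).squareRoot()
    return (position, rotation, scale)
}

// Example: the pair of points moves up by 10 and rotates a quarter turn.
let result = transform(refA: TrackedPoint(x: 0, y: 0),
                       refB: TrackedPoint(x: 10, y: 0),
                       curA: TrackedPoint(x: 5, y: 5),
                       curB: TrackedPoint(x: 5, y: 15))
// result.position == (0, 10), result.rotation == .pi / 2, result.scale == 1
```

One point alone can only supply the position term here; the rotation and scale terms exist only because a second point gives the tracker a baseline vector to compare across frames.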
As Motion analyzes movement, it records the tracking data, which you can then apply to any other object in your project. Additionally, some tracking behaviors let you apply motion created by keyframes or other behaviors to objects.