D3PLOT 22.1

Match Image

Automatically aligns the current analysis image with the background by calculating the transformation parameters required.

Lining up an image requires the calculation of 11 unknowns:

  • The camera position (3 coordinates)
  • The direction in which the camera is pointing (3 vector terms)
  • The "Up" axis of the camera (3 vector terms)
  • The distance of the object from the camera, ie perspective distance (1 term)
  • The focal length of the camera lens, ie image scale (1 term)

In the orthographic projection case, where the object is viewed in a parallel-sided frustum, the perspective distance can be omitted, leaving only 10 values to be computed. However, when photographs rather than computer-generated images are to be matched, which is normally the case, these implicitly use perspective projection and all 11 variables must be computed.

Each matching point on the image has an (x,y,z) coordinate, so to find 10 or 11 values a minimum of 4 points, giving 12 independent values, is required. This calculation can be performed by D3PLOT if four or more well-chosen nodes on the model are matched to their corresponding points on the image, although in practice 5 or 6 points are required for a good match, mainly because of the difficulty of choosing well-spaced points in the screen-local Z (depth) direction.
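
The calculation above can be sketched as a projection-plus-residual problem. The camera model and function names below are illustrative assumptions, not D3PLOT's internal API: a least-squares solver would adjust the 11 parameters to minimise the screen-space error over all <node : point> pairs.

```python
import math

def project(camera_pos, view_dir, up, persp_dist, focal, point):
    """Project a 3D node coordinate to 2D screen space with a simple
    perspective camera (illustrative model, not D3PLOT's internals).
    view_dir and up are assumed to be unit vectors."""
    f = view_dir
    # Build an orthonormal camera basis: r = f x up, true up u = r x f.
    r = (f[1]*up[2] - f[2]*up[1],
         f[2]*up[0] - f[0]*up[2],
         f[0]*up[1] - f[1]*up[0])
    rl = math.sqrt(sum(c*c for c in r))
    r = tuple(c/rl for c in r)
    u = (r[1]*f[2] - r[2]*f[1],
         r[2]*f[0] - r[0]*f[2],
         r[0]*f[1] - r[1]*f[0])
    # Express the point in camera coordinates.
    d = tuple(point[i] - camera_pos[i] for i in range(3))
    x = sum(d[i]*r[i] for i in range(3))
    y = sum(d[i]*u[i] for i in range(3))
    z = sum(d[i]*f[i] for i in range(3))
    # Perspective divide: scale shrinks with depth beyond persp_dist.
    s = focal / (z / persp_dist)
    return (x*s, y*s)

def residual(cam, pairs):
    """Sum of squared screen errors over all <node : point> pairs.
    cam = (position, direction, up, persp_dist, focal): 11 scalars."""
    pos, dirn, upv, pd, fl = cam
    total = 0.0
    for node_xyz, (qx, qy) in pairs:
        px, py = project(pos, dirn, upv, pd, fl, node_xyz)
        total += (px - qx)**2 + (py - qy)**2
    return total
```

A perfect match drives the residual to zero; with fewer than four pairs the system is under-determined, which is why at least four are needed.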

Add point(s): Defining <node : point> pairs for matching

In the (artificial) example below the green image on the left has been read in as a background image, and the task is to get the red analysis image on the right to lie on top of it.

The user has defined 3 points so far: the nodes, identified by yellow pick symbols on the right, correspond to their matching points (red symbols and labels) on the left; the blue line shows which points and nodes are associated. These are screen-picked by selecting first the node, and then the corresponding point, and so on for the next pair.

Calculate: aligning analysis with image

Once four or more <node : point> pairs have been defined the revised view can be calculated. This computes the revised viewing parameters and updates the image immediately. If the images can be matched and the points have been well chosen, the analysis should lie exactly over the target image.

Edit...: correcting poorly chosen points

In the example below points have deliberately been chosen badly to obtain a poor match. (The error here is choosing points, ringed in blue, that lie more or less in a plane, making it difficult to calculate perspective distance correctly. In addition, choosing only four points can be inadequate unless they are well spaced in all of (x,y,z) coordinate space, and more can be required for a good solution.)

To edit a point, screen-pick either its node or its point (or select it from the menu), then pick the replacement node or point.

Delete and Restart: Deleting points

Delete allows you to delete individual points by selecting them as above. Each point is deleted immediately.
Restart deletes all points letting you make a fresh start.

You can Add, Edit and Delete points in any order. Here is the example above with 6 points (circled in blue) chosen rather more judiciously, and it can be seen that the correspondence is now very good.

What is stored for matching

<Node : point> data is stored on a per-window basis, so it is not possible to apply matching data in Window #1 directly to windows #2, etc. However you can use the "Export view" function on the window's [--] options popup menu to export the current viewing parameters to all other active windows.

"Node" data is stored as a reference to a node in a model, and the current state's coordinate is used for matching purposes. Therefore if you need to match data during an animation you need to choose the state to be used for the matching process.

"Point" data is stored as a parametric (x,y) screen space coordinate, so points will remain valid so long as the aspect ratio of the window remains the same. However in most cases if a window is resized it is best to delete all the points and start again if further matching is required.

Trouble-shooting image matching

If you are having problems getting a good match between image and analysis the following trouble-shooting guide may help.

Choosing points that are all on a plane can cause problems

It is a common problem that many background images do not have much variation of depth (after all, photographs are 2D), and as a consequence there is a tendency to pick points for matching that lie more or less on the same plane of depth with respect to the observer. This will usually give poor matching because it is very hard for D3PLOT to calculate perspective distance and scale when there is little variation of depth between points.

When selecting points the best match is achieved if you imagine a cube around the model, and try to pick points that are on a mixture of its near and far faces, as well as spread out left/right and top/bottom. There is no need to pick all 8 cube vertices, as four well-conditioned points are enough, but if perspective is active it is important to try to choose points that include a variation of depth.
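The "imagine a cube" advice can be turned into a rough numeric check. The function below is an illustrative sketch, not a D3PLOT feature: it compares the depth extent of the picked nodes with their overall coordinate extent, and a ratio near zero warns that the points are nearly coplanar in screen depth.

```python
def depth_spread(points, camera_pos, view_dir):
    """Ratio of depth extent to overall coordinate extent for a set of
    picked nodes.  view_dir is assumed to be a unit vector.  A value
    near 0 means the points are nearly coplanar in screen depth."""
    # Depth of each point along the viewing direction.
    depths = [sum((p[i] - camera_pos[i]) * view_dir[i] for i in range(3))
              for p in points]
    # Largest extent of the point cloud over the three global axes.
    extent = max(max(p[i] for p in points) - min(p[i] for p in points)
                 for i in range(3))
    spread = max(depths) - min(depths)
    return spread / extent if extent else 0.0
```

Four points on a plane facing the camera give a ratio of zero; adding a point on the "far face" of the imagined cube raises it towards one.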

Adding more points won't help if they are ill-conditioned

If the points you have chosen have not been defined accurately enough, or lie on a plane, then adding more similar points will not normally improve the solution - it will simply take longer to calculate the wrong answer.

It is far better to define 4 or 5 well-chosen points, and to delete any that only give a vague match between model and image.

Matching a model to a series of frames of an animation

At present image matching is "static": there is no provision for matching views separately to each frame of an animation. However, the model view can be matched to the first frame of the animation, which places the model viewpoint at the real camera's position. Provided that the camera remains fixed relative to a known reference throughout the animation, for example it moves with the model or is fixed to the ground, the model viewpoint can be fixed in the same way.

In cases where the camera moves with the model you can use Deform, Fixed Node or Shift Deformed to track model movement.

In cases where the camera is fixed to the ground you do not need to do anything, because the model viewpoint is fixed in global coordinates by default.

After you have matched the model view to the first frame, using the same technique as matching it to a background image, you also need to match the timing. For example, suppose the film runs at 0.002 s per frame and the simulation analysis writes a state every 0.005 s. To synchronise them we need to step through every 5 frames of the film and every 2 states of the analysis, since both correspond to intervals of 0.01 s. You can set this under Movie Options.
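
The frame/state arithmetic in the example generalises to any pair of time steps. The function below is a sketch of that calculation, not a D3PLOT feature: it finds the smallest frame and state strides whose elapsed times coincide.

```python
from fractions import Fraction

def sync_strides(frame_dt, state_dt):
    """Smallest (frame_stride, state_stride) with equal elapsed time,
    i.e. frame_stride * frame_dt == state_stride * state_dt.
    e.g. 0.002 s/frame and 0.005 s/state -> (5, 2), both 0.01 s.
    limit_denominator() cleans up binary floating-point noise."""
    ratio = (Fraction(state_dt).limit_denominator()
             / Fraction(frame_dt).limit_denominator())
    return (ratio.numerator, ratio.denominator)
```

For the example in the text, sync_strides(0.002, 0.005) gives (5, 2): every 5th film frame lines up with every 2nd analysis state.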