
python - 3D matrix perspective transform

I am using shape from shading to generate a Digital Terrain Model (DTM) of an image taken using a camera mounted on a mobile platform. The algorithm, written in Python, seems to work reasonably well; however, the output is at an incline and a bit spherical, so I suspect I need to remove perspective distortion and barrelling from the DTM.

The data is available here in case anyone is interested in having a go at this.

The camera is mounted at an inclination of 41 degrees and has the following camera and distortion matrices:

    import numpy

    cam_matrix = numpy.matrix([[246.00559,0.00000,169.87374],[0.00000,247.37317,132.21396],[0.00000,0.00000,1.00000]])
    distortion_matrix = numpy.matrix([0.04674, -0.11775, -0.00464, -0.00346, 0.00000])

How can I apply a perspective transform and remove the barrel distortion from this matrix to obtain a flattened DTM?

I have attempted this using OpenCV, but it doesn't work: OpenCV expects an image, and its transforms simply move pixels around rather than manipulate their values. I have also researched NumPy and SciPy but haven't arrived at a conclusion or a solution yet. I am somewhat familiar with the theory behind these transforms but have mostly worked on 2D versions.
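
For reference, cv2.undistortPoints can be applied to bare point coordinates rather than to an image; below is a minimal sketch, where the pixel positions are placeholders rather than values from the data:

import cv2
import numpy as np

cam_matrix = np.array([[246.00559, 0.0, 169.87374],
                       [0.0, 247.37317, 132.21396],
                       [0.0, 0.0, 1.0]])
dist_coeffs = np.array([0.04674, -0.11775, -0.00464, -0.00346, 0.0])

# Placeholder pixel coordinates, shaped (N, 1, 2) as OpenCV expects.
pts = np.array([[[100.0, 120.0]], [[200.0, 50.0]]], dtype=np.float32)

# With P=cam_matrix the result comes back in pixel coordinates;
# omit P to get normalized image coordinates instead.
undistorted = cv2.undistortPoints(pts, cam_matrix, dist_coeffs, P=cam_matrix)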

Any ideas?

1 Answer


You can use a 4 x 4 transformation matrix, which is invertible and allows a bi-directional transformation between the two coordinate systems you want.

Suppose you know the three rotations a, b and g about x, y and z respectively (using the right-hand rule), and the translations x0, y0, z0 between the origins of the two coordinate systems.

The transformation matrix is defined as:

import numpy as np
from numpy import sin, cos

T = np.array([[ cos(b)*cos(g), (sin(a)*sin(b)*cos(g) + cos(a)*sin(g)), (sin(a)*sin(g) - cos(a)*sin(b)*cos(g)), x0],
              [-cos(b)*sin(g), (cos(a)*cos(g) - sin(a)*sin(b)*sin(g)), (sin(a)*cos(g) + cos(a)*sin(b)*sin(g)), y0],
              [        sin(b),                         -sin(a)*cos(b),                          cos(a)*cos(b), z0],
              [             0,                                      0,                                      0,  1]])
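
For example (a sketch with assumed values, not part of the original answer): if the 41-degree camera inclination is taken as a rotation about the x axis and the two origins are assumed to coincide, the angles and offsets could be set as follows before building T. Since T is invertible, np.linalg.inv(T) then gives the mapping back the other way.

import numpy as np

a = np.radians(41.0)          # rotation about x (assumed: the camera inclination)
b = 0.0                       # rotation about y (assumed zero)
g = 0.0                       # rotation about z (assumed zero)
x0, y0, z0 = 0.0, 0.0, 0.0    # origin offsets (assumed zero)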

To use it efficiently you should put your points in a 2-D array like:

orig = np.array([[x0, x1, ..., xn],
                 [y0, y1, ..., yn],
                 [z0, z1, ..., zn],
                 [ 1,  1, ...,  1]])
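
If the DTM is stored as 2-D grids (a hypothetical layout used only for illustration), the stacked array can be built by flattening each grid:

import numpy as np

# Hypothetical grids: X and Y hold the ground-plane coordinates of each DTM
# cell and Z the heights from shape from shading; the shape is a placeholder.
h, w = 240, 320
X, Y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
Z = np.zeros((h, w))                       # replace with the real DTM heights

orig = np.vstack([X.ravel(), Y.ravel(), Z.ravel(), np.ones(h * w)])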

Then:

new = T.dot(orig)

will give you the transformed points.
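
Continuing the hypothetical grids from the sketch above, the transformed coordinates can be reshaped back into the original 2-D layout:

Xn = new[0].reshape(h, w)
Yn = new[1].reshape(h, w)
Zn = new[2].reshape(h, w)   # heights of the flattened DTM in the new frame
# new[3] stays all ones because the last row of T is [0, 0, 0, 1]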

