Description
I have an .obj mesh, camera extrinsics (a world2cam matrix, i.e. R and T), and a photo of the object taken from the same viewpoint. When I try to render this obj with PyTorch3D, the rendered result comes out slightly shifted in view.
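For reference, this is roughly how I set up the renderer (a minimal sketch with placeholder values; `mesh.obj` and the identity extrinsics stand in for my real data):

```python
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (
    PerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, SoftPhongShader, PointLights,
)

device = torch.device("cpu")
mesh = load_objs_as_meshes(["mesh.obj"], device=device)  # placeholder path

# Placeholder extrinsics; in my pipeline R and T come from the world2cam matrix.
R = torch.eye(3)[None]                # (1, 3, 3)
T = torch.tensor([[0.0, 0.0, 2.0]])   # (1, 3)
cameras = PerspectiveCameras(R=R, T=T, device=device)

renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=512),
    ),
    shader=SoftPhongShader(device=device, cameras=cameras,
                           lights=PointLights(device=device)),
)
images = renderer(mesh)  # (1, 512, 512, 4) RGBA tensor
```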
For comparison, I show the original photo, the OpenGL rendering result, and the PyTorch3D rendering result.
The original photo in the same viewpoint:
https://i.loli.net/2020/07/30/ysbBakuDZFrSi9C.png
The OpenGL rendering result:
https://i.loli.net/2020/07/30/Lk7w6SOqiAWyoMF.png
The PyTorch3D rendering result:
https://i.loli.net/2020/07/30/lWmNhfPxIDU3JyG.png
I also made a GIF combining the original photo, the OpenGL result, and the PyTorch3D result:
https://s1.ax1x.com/2020/07/30/aM9ZlR.gif
I think there may be some difference in the camera center. I noticed that when PyTorch3D calculates the camera center, it obtains it by inverting the world2cam_pytorch3d matrix, where:
```
world2cam_pytorch3d = [R, 0; T, 1]   # R in the top-left 3x3 block, T in the bottom row
camera_center = world2cam_pytorch3d.inverse()[3, :3]
```
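In code, the PyTorch3D-style computation I mean looks like this (a toy example with placeholder R and T, using PyTorch3D's row-vector convention where points transform as `x_cam = x_world @ M`):

```python
import torch

R = torch.eye(3)                    # placeholder 3x3 rotation
T = torch.tensor([0.0, 0.0, 2.0])   # placeholder translation

# Row-vector layout: R in the top-left 3x3 block, T in the bottom row.
world2cam_pytorch3d = torch.eye(4)
world2cam_pytorch3d[:3, :3] = R
world2cam_pytorch3d[3, :3] = T

camera_center = world2cam_pytorch3d.inverse()[3, :3]
print(camera_center)  # -T @ R.inverse() = [0, 0, -2] here
```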
OpenGL, on the other hand, calculates the camera center by inverting the world2cam matrix, where:
```
world2cam = [R, T; 0, 1]   # R in the top-left 3x3 block, T in the last column
camera_center = world2cam.inverse()[:3, 3]
```
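And the OpenGL-style computation, with the same placeholder values but the column-vector layout (points transform as `x_cam = M @ x_world`):

```python
import torch

R = torch.eye(3)                    # placeholder 3x3 rotation
T = torch.tensor([0.0, 0.0, 2.0])   # placeholder translation

# Column-vector layout: R in the top-left 3x3 block, T in the last column.
world2cam = torch.eye(4)
world2cam[:3, :3] = R
world2cam[:3, 3] = T

camera_center = world2cam.inverse()[:3, 3]
print(camera_center)  # -R.inverse() @ T = [0, 0, -2] here
```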
I printed the results of the two calculations and found that they are not exactly the same:
https://i.loli.net/2020/07/30/yVYuoPgceEpIfKd.png
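As a sanity check with toy values (not my real extrinsics), plugging the same R and T into both layouts reproduces the mismatch: the row-vector center is -T @ R⁻¹ while the column-vector center is -R⁻¹ @ T, so for a generic rotation they only coincide when R is transposed between the two conventions:

```python
import math
import torch

c, s = math.cos(0.5), math.sin(0.5)
R = torch.tensor([[c,  -s,  0.0],
                  [s,   c,  0.0],
                  [0.0, 0.0, 1.0]])  # a generic (non-symmetric) rotation
T = torch.tensor([1.0, 2.0, 3.0])

row = torch.eye(4)        # PyTorch3D-style row-vector layout
row[:3, :3] = R
row[3, :3] = T

col = torch.eye(4)        # OpenGL-style column-vector layout
col[:3, :3] = R
col[:3, 3] = T

print(row.inverse()[3, :3])   # -T @ R.inverse(): differs from the line below
print(col.inverse()[:3, 3])   # -R.inverse() @ T

# Transposing R in the row-vector layout makes the two centers agree:
row_t = torch.eye(4)
row_t[:3, :3] = R.t()
row_t[3, :3] = T
print(row_t.inverse()[3, :3])  # matches col.inverse()[:3, 3]
```

(If that is what is happening here, the R passed to PyTorch3D would need to be the transpose of the one used in the OpenGL path.)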
So I am confused about this issue: is it correct to transform R and T into the PyTorch3D form like this?