🚀 Feature
Ease the use of OpenCV-style cameras with the Pulsar interface.
Motivation
At the moment it is not clear how to convert cameras obtained from an OpenCV calibration into Pulsar cameras, e.g. how to parse the focal lengths and how to define the sensor size. On top of that, there are some inconsistencies between Pulsar and OpenCV: OpenCV uses the principal point while Pulsar uses principal point offsets, and OpenCV uses the camera translation vector while Pulsar directly uses the camera position in world coordinates. Adding this functionality to Pulsar would be greatly beneficial, as many datasets and calibration toolboxes use OpenCV-style camera calibrations.
Pitch
An easy solution to this could be to add a new renderer class that inherits from `pytorch3d.renderer.points.pulsar.renderer.Renderer` and does all the conversions internally, most importantly:

- transforms the camera translation `tvec` to the camera position `C`
- converts the `focal_length` in image space to a fitting `sensor_size` and `focal_length` in world space (as required by Pulsar)
- converts the `principal_point` to `principal_point_offsets` (as required by Pulsar)
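The conversions above can be sketched as a small standalone helper. This is a minimal sketch, not the proposed implementation: `opencv_to_pulsar`, the choice of `focal_length_world`, and the returned tuple layout are all assumptions for illustration; the actual sign conventions of Pulsar's principal point offsets and its parameter packing would need to be checked against the renderer.

```python
import torch


def opencv_to_pulsar(R, tvec, K, image_size, focal_length_world=1.0):
    """Sketch: convert OpenCV-style camera parameters to Pulsar-style ones.

    Assumed conventions (to be verified against the actual interfaces):
      - R, tvec map world points to camera coords: x_cam = R @ x_world + tvec
      - K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], in pixels
      - Pulsar expects the camera position in world coordinates, a focal
        length and sensor width in world units, and principal point offsets
        in pixels relative to the image center.
    """
    H, W = image_size
    fx = K[0, 0]
    cx, cy = K[0, 2], K[1, 2]
    # Camera center in world coordinates: C = -R^T @ tvec.
    cam_pos = -R.T @ tvec
    # The pixel focal length relates world focal length and sensor width via
    # fx = focal_length_world / sensor_width * W; pick focal_length_world
    # freely and solve for the sensor width.
    sensor_width = focal_length_world * W / fx
    # Principal point offset from the image center, in pixels (the sign
    # convention may need flipping depending on the renderer).
    pp_offset = torch.stack([cx - W / 2.0, cy - H / 2.0])
    return cam_pos, focal_length_world, sensor_width, pp_offset
```

Note that the image-space focal length alone is underdetermined in world units: only the ratio `focal_length / sensor_width` is fixed by `fx`, which is why one of the two can be chosen freely.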
As a class name I would suggest one of:

- `PinholeRenderer`
- `OpenCVRenderer`
- `MVGRenderer` (for Multiple View Geometry)
I would also suggest adapting the `forward` function of the class so that the (OpenCV) camera parameters are passed independently, to make the interface more explicit and self-explanatory:

```python
def forward(
    self,
    vert_pos: torch.Tensor,
    vert_col: torch.Tensor,
    vert_rad: torch.Tensor,
    R: torch.Tensor,
    K: torch.Tensor,
    tvec: torch.Tensor,
    ...
) -> Union[torch.Tensor, Tuple[torch.Tensor, Optional[torch.Tensor]]]:
```