Pulsar interface for OpenCV-style cameras #519


Closed
jutanke opened this issue Jan 12, 2021 · 3 comments
jutanke commented Jan 12, 2021

🚀 Feature

Ease the use of OpenCV-style cameras with the Pulsar interface.

Motivation

At the moment it is not clear how to convert cameras obtained from an OpenCV calibration to Pulsar cameras, e.g. how to parse the focal lengths and how to define the sensor size. On top of that, there are some inconsistencies between Pulsar and OpenCV: OpenCV uses the principal point while Pulsar uses principal point offsets, and OpenCV uses the camera translation vector while Pulsar directly uses the camera position in world coordinates. Adding this functionality to Pulsar would be greatly beneficial, as many datasets and calibration toolboxes use OpenCV-style camera calibrations.

Pitch

An easy solution to this could be to add a new renderer class that inherits from the Pulsar Renderer class (pytorch3d/renderer/points/pulsar/renderer.py) and does all the conversions internally, most importantly:

  • transforms camera translation tvec to camera position C
  • converts the focal_length in image space (pixels) to a matching sensor_size and focal_length in world space (as required by Pulsar)
  • converts the principal_point to principal_point_offsets (as required by Pulsar).
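The three conversions above can be sketched in a few lines. This is a minimal NumPy sketch under assumed conventions; the helper name, the choice of a fixed world-space focal length, and the returned layout are illustrative, not the actual Pulsar API:

```python
import numpy as np

def opencv_to_pulsar_params(R, tvec, K, image_size, focal_length_world=1.0):
    """Hypothetical helper: map OpenCV extrinsics/intrinsics to a
    Pulsar-style parameterization (camera position, world-space focal
    length, sensor size, principal point offsets)."""
    h, w = image_size
    # 1) Camera translation vector -> camera position in world coords: C = -R^T t
    C = -R.T @ tvec
    # 2) Pixel-space focal length -> world-space focal length + sensor size.
    #    Fix an arbitrary world-space focal length and derive the sensor
    #    width that reproduces the same field of view:
    #    fx [px] = focal_length_world / sensor_width * w
    sensor_width = focal_length_world * w / K[0, 0]
    # 3) Principal point -> offsets relative to the image centre (in pixels).
    pp_offsets = np.array([K[0, 2] - w / 2.0, K[1, 2] - h / 2.0])
    return C, focal_length_world, sensor_width, pp_offsets
```

With square pixels (fx == fy) a single sensor width suffices; otherwise the same relation applied to fy and h gives a separate sensor height.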

As a class name, I would suggest one of the following:

  • PinholeRenderer
  • OpenCVRenderer
  • MVGRenderer (Multiple-View-Geometry)

I would also suggest adapting the forward function of the class so that the (OpenCV) camera parameters are passed independently, to make the interface more explicit and self-explanatory:

def forward(
        self,
        vert_pos: torch.Tensor,
        vert_col: torch.Tensor,
        vert_rad: torch.Tensor,
        R: torch.Tensor,
        K: torch.Tensor,
        tvec: torch.Tensor,
        ...
    ) -> Union[torch.Tensor, Tuple[torch.Tensor, Optional[torch.Tensor]]]:
classner (Contributor) commented:
@jutanke Sounds good! How about making this a function that can be used in the forward pass? I don't think this would warrant its own class (for a plethora of reasons; if you're interested I can go into detail), but would be great to have as a function.

Most strikingly, there's no 'state' associated with the conversion. A function that takes the OpenCV-style parameterization as input and produces the Pulsar parameterization as output would do so transparently and could be used in the forward pass with the regular Pulsar renderer. So for example: pulsar_cam = pulsar_from_opencv(K, R, T), and then pulsar_cam is a tensor that can be used directly as input to the renderer.
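As a sanity check for such a stateless conversion, one can verify that the OpenCV pinhole projection and the position/sensor-size parameterization described above yield identical pixel coordinates. A hedged NumPy sketch; the function names and the world-space focal length / sensor-size split are assumptions for illustration, not Pulsar's actual API:

```python
import numpy as np

def project_opencv(K, R, tvec, X_w):
    """Standard OpenCV pinhole projection: X_c = R X_w + t, u = fx x/z + cx."""
    X_c = R @ X_w + tvec
    u = K[0, 0] * X_c[0] / X_c[2] + K[0, 2]
    v = K[1, 1] * X_c[1] / X_c[2] + K[1, 2]
    return np.array([u, v])

def project_pulsar_style(C, R, f_world, sensor_w, sensor_h, pp_off, wh, X_w):
    """Same projection, but in position / world-focal-length / sensor-size /
    principal-point-offset form (hypothetical parameterization)."""
    w, h = wh
    X_c = R @ (X_w - C)                # same camera frame, position form
    x_img = f_world * X_c[0] / X_c[2]  # image-plane coords in world units
    y_img = f_world * X_c[1] / X_c[2]
    u = x_img / sensor_w * w + w / 2.0 + pp_off[0]
    v = y_img / sensor_h * h + h / 2.0 + pp_off[1]
    return np.array([u, v])
```

Because sensor_width = f_world * w / fx (and likewise for the height), the two projections agree for any world point, which is exactly the property a pulsar_from_opencv-style function would need to guarantee.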

github-actions bot commented:
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Jun 23, 2021
github-actions bot commented:
This issue was closed because it has been stalled for 5 days with no activity.
