
Volume rendering problem #847


Closed
ZX-Yin opened this issue Sep 18, 2021 · 6 comments
Assignees
Labels
how to: How to use PyTorch3D in my project
potential-bug: Potential bug to flag an issue that needs to be looked into
Stale

Comments


ZX-Yin commented Sep 18, 2021

❓ Questions on how to use PyTorch3D volume rendering

Thanks for this amazing work.
We are using PyTorch3D for volume rendering. The poses follow the OpenCV convention, and the volume is obtained with a TSDF fusion algorithm, but I cannot get a correct rendered image. All of the data comes from ScanNet. The rendered image is quite different from the ground-truth image; examples are below.
[Screenshots: rendered output vs. ground-truth frame 100]
Has anyone encountered the same problem?

Here is my code:

import os  # needed for os.environ / os.getcwd below
import sys
import torch
need_pytorch3d=False
try:
    import pytorch3d
except ModuleNotFoundError:
    need_pytorch3d=True
if need_pytorch3d:
    if torch.__version__.startswith("1.9") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        version_str="".join([
            f"py3{sys.version_info.minor}_cu",
            torch.version.cuda.replace(".",""),
            f"_pyt{torch.__version__[0:5:2]}"
        ])
        !pip install pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
    else:
        # We try to install PyTorch3D from source.
        !curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
        !tar xzf 1.10.0.tar.gz
        os.environ["CUB_HOME"] = os.getcwd() + "/cub-1.10.0"
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'

import os
import sys
import time
import json
import glob
import torch
import math
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image
from IPython import display

# Data structures and functions for rendering
from pytorch3d.structures import Volumes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, 
    PerspectiveCameras,
    OrthographicCameras,
    VolumeRenderer,
    NDCGridRaysampler,
    MonteCarloRaysampler,
    GridRaysampler,
    EmissionAbsorptionRaymarcher,
    AbsorptionOnlyRaymarcher,
)
from pytorch3d.transforms import so3_exp_map
from pytorch3d.utils import cameras_from_opencv_projection

# obtain the utilized device
if torch.cuda.is_available():
    device = torch.device("cuda:0")
    torch.cuda.set_device(device)
else:
    device = torch.device("cpu")


render_size = (640, 480)
width, height = render_size
volume_extent_world = 10.
raysampler = GridRaysampler(
    image_width=width,
    image_height=height,
    n_pts_per_ray=200,
    min_x = 0,
    min_y = 0,
    max_x = render_size[0],
    max_y = render_size[1],
    min_depth = 0.04,
    max_depth = volume_extent_world,
)

raymarcher = EmissionAbsorptionRaymarcher()

renderer = VolumeRenderer(
    raysampler=raysampler, raymarcher=raymarcher
)

K = np.loadtxt(os.path.join(src_root, 'intrinsic/intrinsic_depth.txt')).astype(np.float32)
focal = torch.Tensor([K[0, 0], K[1, 1]]).unsqueeze(0)
principal = torch.Tensor([K[0, -1], K[1, -1]]).unsqueeze(0)
pose = np.loadtxt(os.path.join(src_root, 'pose/100.txt')).astype(np.float32)

K = torch.from_numpy(K).unsqueeze(0)
R = torch.from_numpy(pose[:3, :3]).unsqueeze(0)
T = torch.from_numpy(pose[:3, -1]).unsqueeze(0)
image_size = torch.Tensor([height, width]).unsqueeze(0)

device = 'cuda:0'
batch_cameras = PerspectiveCameras(
    focal_length=focal, principal_point=principal, in_ndc=False, R=R, T=T, image_size=image_size, device=device
)

densities = torch.from_numpy(tsdf['tsdf'].astype(np.float32)).unsqueeze(0).unsqueeze(0)
colors = torch.from_numpy(tsdf['color'].astype(np.float32)).unsqueeze(0)
volume_translation = (torch.from_numpy(tsdf['origin'][0].astype(np.float32))).unsqueeze(0)

densities = 1.0 - densities.abs()
colors = colors / 255.
voxel_size = torch.Tensor(tsdf['voxel_size']).unsqueeze(0)

volume = Volumes(
    densities = densities,
    features = colors,
    voxel_size = voxel_size,
    volume_translation = volume_translation
).to(device)

outp = renderer(cameras=batch_cameras, volumes=volume)
rendered_image, rendered_silhouette = outp[0].split([3, 1], dim=-1)
fig, ax = plt.subplots(1, 2, figsize=(10, 10))
ax = ax.ravel()
clamp_and_detach = lambda x: x.clamp(0.0, 1.0).cpu().detach().numpy()
ax[0].imshow(clamp_and_detach(rendered_image[0]))
ax[1].imshow(clamp_and_detach(rendered_silhouette[0]))
for ax_, title_ in zip(
    ax,
    ("rendered image", "rendered silhouette")
):
    ax_.grid("off")
    ax_.axis("off")
    ax_.set_title(title_)
fig.canvas.draw()
fig.show()
@nikhilaravi nikhilaravi added the how to How to use PyTorch3D in my project label Sep 20, 2021
@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Oct 21, 2021
@github-actions
Copy link

This issue was closed because it has been stalled for 5 days with no activity.

@bottler bottler removed the Stale label Oct 26, 2021
@bottler bottler reopened this Oct 26, 2021
@nikhilaravi nikhilaravi added the potential-bug Potential bug to flag an issue that needs to be looked into label Nov 17, 2021
@nikhilaravi (Contributor) commented:

@jasonyinn are you converting the cameras to the PyTorch3D convention? Here is a related issue for how to do this: #287.

ZX-Yin (Author) commented Nov 17, 2021

> @jasonyinn are you converting the cameras to the PyTorch3D convention? Here is a related issue for how to do this: #287.

Thank you for your reply.
I have converted them to the PyTorch3D convention, but it didn't work. I also tried the function pytorch3d.utils.cameras_from_opencv_projection to build the camera from the original OpenCV-convention pose and intrinsics, and that failed too.
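For anyone comparing their manual conversion against the library helper: below is a pure-torch sketch of the extrinsics flip that, to my understanding, cameras_from_opencv_projection applies internally (intrinsics/principal-point handling omitted). The function name here is our own, not part of the PyTorch3D API. OpenCV uses a column-vector convention with +x right and +y down, while PyTorch3D uses a row-vector convention with +x left and +y up, so the rotation is transposed and the first two axes are negated.

```python
import torch

def opencv_to_pytorch3d_RT(R_cv: torch.Tensor, t_cv: torch.Tensor):
    """Convert batched world-to-camera OpenCV extrinsics
    (R_cv: (N, 3, 3), t_cv: (N, 3)) to the PyTorch3D convention.

    Sketch only: mirrors the rotation/translation flip performed by
    pytorch3d.utils.cameras_from_opencv_projection; intrinsics are
    not handled here.
    """
    # PyTorch3D multiplies row vectors on the left (x_cam = x_world @ R + T),
    # so the world-to-camera rotation is transposed.
    R_p3d = R_cv.permute(0, 2, 1).clone()
    t_p3d = t_cv.clone()
    # Flip the x and y axes (OpenCV: +x right, +y down; PyTorch3D: +x left, +y up).
    R_p3d[:, :, :2] *= -1
    t_p3d[:, :2] *= -1
    return R_p3d, t_p3d

# Example: an identity OpenCV pose becomes a rotation with the
# first two axes negated, and the x/y translation components flip sign.
R_cv = torch.eye(3).unsqueeze(0)
t_cv = torch.tensor([[1.0, 2.0, 3.0]])
R_p3d, t_p3d = opencv_to_pytorch3d_RT(R_cv, t_cv)
```

Note that ScanNet's pose/*.txt files are commonly camera-to-world matrices, so they may need inverting before any world-to-camera conversion like the one above; worth double-checking in your pipeline.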

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Dec 18, 2021
@github-actions

This issue was closed because it has been stalled for 5 days with no activity.
