Commit 4123578

Fisheye camera docs for UE4 (#9373)
* added fisheye camera
* updated script parameters
1 parent a4eca2e commit 4123578

File tree

3 files changed: 68 additions, 18 deletions

Docs/catalogue_scripts.md

Lines changed: 10 additions & 9 deletions
```diff
@@ -95,20 +95,21 @@ This script launches manual control with Chrono physics, using the Sedan powertr
 
 * Script filename: `manual_control_fisheye.py`
 
-This script launches manual control with the fisheye camera model. It has some additional command line arguments for fisheye camera model parameters:
+This script launches manual control with the fisheye camera model. It has the same key commands and command line arguments as [Manual control](#manual-control), plus some additional command line arguments for the camera model parameters:
 
 | Argument | Short form | Default | Description |
 |----------|------------|---------|-------------|
 | --fov | | 90.0 | Camera field of view angle |
 | --fov_mask | | Not active | Mask pixels outside of FOV |
-| --fov_fade_size | | 0.0 | Fade of FOV edge in pixels |
-| --model | | perspective | Fisheye model: |
-| --k0 | | 0.0 | k0 Kannala-Brandt parameter |
-| --k1 | | 0.0 | k1 Kannala-Brandt parameter |
-| --k2 | | 0.0 | k2 Kannala-Brandt parameter |
-| --k3 | | 0.0 | k3 Kannala-Brandt parameter |
-| --equirectangular | | Not active | Image rectification |
-| --perspective | | Not active | Image rectification |
+| --fov_fade_size | | 0.0 | Fade of the FOV edge, with the given broadening factor |
+| --model | | perspective | Camera model: <br>perspective <br>equidistant <br>equisolid <br>orthographic <br>stereographic <br>kannala-brandt |
+| --k0 | | 0.0831 | k0 Kannala-Brandt parameter |
+| --k1 | | 0.0111 | k1 Kannala-Brandt parameter |
+| --k2 | | 0.00858 | k2 Kannala-Brandt parameter |
+| --k3 | | 0.000854 | k3 Kannala-Brandt parameter |
+| --equirectangular | | Not active | Equirectangular projection |
+| --perspective | | Not active | Perspective projection |
+| --longitude_shift | | 0.0 | Shifts the view center, in degrees, for the equirectangular model |
 
 #### Steering wheel
```
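As a sketch of how the new arguments combine, an invocation of the script with the Kannala-Brandt model might look like the following. This is illustrative only: it assumes a running CARLA server and uses the default coefficient values from the table above.

```shell
# Hypothetical invocation; assumes a CARLA server is already running.
# Selects the Kannala-Brandt model with the default coefficients and
# masks pixels that fall outside the field of view.
python manual_control_fisheye.py \
    --model kannala-brandt \
    --k0 0.0831 --k1 0.0111 --k2 0.00858 --k3 0.000854 \
    --fov 90.0 --fov_mask
```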

Docs/core_sensors.md

Lines changed: 15 additions & 9 deletions
````diff
@@ -45,18 +45,23 @@ blueprint.set_attribute('sensor_tick', '1.0')
 
 ### Spawning
 
-`attachment_to` and `attachment_type`, are crucial. Sensors should be attached to a parent actor, usually a vehicle, to follow it around and gather the information. The attachment type will determine how its position is updated regarding said vehicle.
-
-* __Rigid attachment.__ Movement is strict regarding its parent location. This is the proper attachment to retrieve data from the simulation.
-* __SpringArm attachment.__ Movement is eased with little accelerations and decelerations. This attachment is only recommended to record videos from the simulation. The movement is smooth and "hops" are avoided when updating the cameras' positions.
-* __SpringArmGhost attachment.__ Like the previous one but without doing the collision test, so the camera or sensor could cross walls or other geometries.
+Sensors can be placed anywhere in the map. Normally they are attached to a parent actor, such as a vehicle, to move around the simulation like a sensor physically mounted to a vehicle. The `spawn_actor()` method requires a sensor blueprint chosen from the blueprint library and a transform. For an unattached sensor, the transform defines its absolute position in the CARLA world. The `attach_to` parameter defines the actor to attach the sensor to. For an attached sensor, the transform defines its position relative to that actor, in the actor's coordinate system.
 
 ```py
+my_vehicle = world.spawn_actor(vehicle_blueprint, spawn_point)
 transform = carla.Transform(carla.Location(x=0.8, z=1.7))
-sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
+sensor = world.spawn_actor(sensor_blueprint, transform, attach_to=my_vehicle)
+```
+
+The `attachment_type` parameter defines how the sensor's movement is governed by the movement of its parent actor.
+
+* __Rigid attachment__ (`carla.AttachmentType.Rigid`): Movement strictly follows the parent actor's pose. This is the most common choice for vehicle-mounted sensors such as cameras and LIDARs, and it is the default attachment type.
+* __SpringArm attachment__ (`carla.AttachmentType.SpringArm`): Movement is eased with small accelerations and decelerations. This attachment is recommended for recording videos from the simulation: the movement is smooth and "hops" are avoided when the camera positions are updated.
+* __SpringArmGhost attachment__ (`carla.AttachmentType.SpringArmGhost`): Like SpringArm but without the collision test, so the camera or sensor can cross walls or other geometry.
+
+```py
+sensor = world.spawn_actor(sensor_blueprint, transform, attach_to=my_vehicle, attachment_type=carla.AttachmentType.SpringArm)
 ```
-!!! Important
-    When spawning with attachment, location must be relative to the parent actor.
 
 ### Listening
````
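The relative-transform idea above can be sketched without CARLA at all. The following is an illustrative 2D stand-in (the function name and tuple arguments are not part of the CARLA API): an attached sensor's world position is the parent's position plus the mounting offset rotated into the parent's coordinate frame.

```python
import math

# Illustrative 2D sketch of transform composition for an attached sensor:
# world position = parent position + R(parent_yaw) * relative offset.
# None of these names are CARLA API; this only demonstrates the geometry.
def child_world_location(parent_xy, parent_yaw_deg, offset_xy):
    yaw = math.radians(parent_yaw_deg)
    px, py = parent_xy
    ox, oy = offset_xy
    return (px + ox * math.cos(yaw) - oy * math.sin(yaw),
            py + ox * math.sin(yaw) + oy * math.cos(yaw))

# A sensor mounted 0.8 m forward on a vehicle at (10, 20), facing +90 degrees:
# the same local offset now points along the world y axis.
sensor_xy = child_world_location((10.0, 20.0), 90.0, (0.8, 0.0))
# -> approximately (10.0, 20.8)
```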

```diff
@@ -112,8 +117,9 @@ Take a shot of the world from their point of view. For cameras that return [carl
 
 |Sensor |Output | Overview |
 | ----------------- | ---------- | ------------------ |
-| [Depth](ref_sensors.md#depth-camera) | [carla.Image](<../python_api#carlaimage>) |Renders the depth of the elements in the field of view in a gray-scale map. |
 | [RGB](ref_sensors.md#rgb-camera) | [carla.Image](<../python_api#carlaimage>) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
+| [Wide angle](ref_sensors.md#wide-angle-cameras) | [carla.Image](<../python_api#carlaimage>) | Simulates alternative camera models, including wide-angle, fisheye and 360 degree cameras. |
+| [Depth](ref_sensors.md#depth-camera) | [carla.Image](<../python_api#carlaimage>) | Renders the depth of the elements in the field of view in a gray-scale map. |
 | [Optical Flow](ref_sensors.md#optical-flow-camera) | [carla.Image](<../python_api#carlaimage>) | Renders the motion of every pixel from the camera. |
 | [Semantic segmentation](ref_sensors.md#semantic-segmentation-camera) | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags. |
 | [Instance segmentation](ref_sensors.md#instance-segmentation-camera) | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags and a unique object ID. |
```
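Each of these cameras returns a `carla.Image` whose `raw_data` is a flat array of BGRA 32-bit pixels (see the sensor reference). A common first step is reshaping that buffer into an image array. The sketch below substitutes a synthetic byte string for a real image, so the sizes are illustrative only.

```python
import numpy as np

# Synthetic stand-in for carla.Image: 4 bytes (B, G, R, A) per pixel.
width, height = 4, 2
raw_data = bytes(range(width * height * 4))  # a real image supplies this buffer

# Reshape the flat BGRA buffer into (rows, cols, channels) ...
bgra = np.frombuffer(raw_data, dtype=np.uint8).reshape((height, width, 4))
# ... then drop the alpha channel and reverse the channel order to get RGB.
rgb = bgra[:, :, :3][:, :, ::-1]
```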

Docs/ref_sensors.md

Lines changed: 43 additions & 0 deletions
```diff
@@ -9,6 +9,7 @@
 - [__Obstacle detector__](#obstacle-detector)
 - [__Radar sensor__](#radar-sensor)
 - [__RGB camera__](#rgb-camera)
+- [__Wide-angle camera__](#wide-angle-cameras)
 - [__RSS sensor__](#rss-sensor)
 - [__Semantic LIDAR sensor__](#semantic-lidar-sensor)
 - [__Semantic segmentation camera__](#semantic-segmentation-camera)
```
```diff
@@ -559,9 +560,51 @@ Since these effects are provided by UE, please make sure to check their document
 | `fov` | float | Horizontal field of view in degrees. |
 | `raw_data` | bytes | Array of BGRA 32-bit pixels. |
 
+---
+
+## Wide-angle cameras
+
+* __Blueprint RGB:__ sensor.camera.rgb.wide_angle_lens
+* __Blueprint depth:__ sensor.camera.depth.wide_angle_lens
+* __Blueprint semantic segmentation:__ sensor.camera.semantic_segmentation.wide_angle_lens
+* __Blueprint instance segmentation:__ sensor.camera.instance_segmentation.wide_angle_lens
+* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
+
+The wide-angle camera simulates multiple types of specialized cameras, such as standard wide-angle, 360 degree and fisheye lenses. It offers standard RGB output along with depth, semantic segmentation and instance segmentation. Numerous projection models are available, including perspective, stereographic, equidistant, equisolid, orthographic and Kannala-Brandt.
+
+The [Kannala-Brandt](https://www.researchgate.net/publication/6899685_A_Generic_Camera_Model_and_Calibration_Method_for_Conventional_Wide-Angle_and_Fish-Eye_Lenses) model used matches the [implementation used in OpenCV](https://docs.opencv.org/3.4/db/d58/group__calib3d__fisheye.html).
+
+| Blueprint attribute | Type | Default | Description |
+| ------------------- | ---- | ------- | ----------- |
+| `camera_model` | str | `perspective` | Options: <br>`perspective`,<br>`stereographic`,<br>`equidistant`,<br>`equisolid`,<br>`orthographic`,<br>`kannala-brandt` |
+| `fov` | float | 90.0 | Horizontal field of view in degrees. |
+| `image_size_x` | int | 800 | Image width in pixels. |
+| `image_size_y` | int | 600 | Image height in pixels. |
+| `k0` | float | 0.0831 | Kannala-Brandt k0 parameter. |
+| `k1` | float | 0.0111 | Kannala-Brandt k1 parameter. |
+| `k2` | float | 0.00858 | Kannala-Brandt k2 parameter. |
+| `k3` | float | 0.000854 | Kannala-Brandt k3 parameter. |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `fov_mask` | bool | false | Masks out pixels outside of the FOV. |
+| `fov_fade_size` | float | 0.0 | Amount of blur at the edge of the `fov_mask`. |
+| `perspective` | bool | false | Turns on the perspective projection. |
+| `equirectangular` | bool | false | Turns on the equirectangular projection. |
+| `longitude_offset` | float | 0.0 | Shifts the view center, in degrees, for the equirectangular model. |
+
+#### Output attributes
+
+| Sensor data attribute | Type | Description |
+| --------------------- | ---- | ----------- |
+| `frame` | int | Frame number when the measurement took place. |
+| `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. |
+| `transform` | [carla.Transform](<../python_api#carlatransform>) | Location and rotation in world coordinates of the sensor at the time of the measurement. |
+| `width` | int | Image width in pixels. |
+| `height` | int | Image height in pixels. |
+| `fov` | float | Horizontal field of view in degrees. |
+| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
```
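The Kannala-Brandt defaults above can be sanity-checked with a few lines of Python. This sketch assumes CARLA's `k0`..`k3` map onto the four polynomial coefficients of the OpenCV fisheye model, i.e. `theta_d = theta * (1 + k0*theta^2 + k1*theta^4 + k2*theta^6 + k3*theta^8)`; that mapping is a reading of the linked OpenCV page, not something this document states explicitly.

```python
import math

# Distorted incidence angle under the OpenCV-style Kannala-Brandt model.
# Mapping CARLA's k0..k3 onto OpenCV's k1..k4 coefficients is an assumption.
def kannala_brandt_theta_d(theta, k0, k1, k2, k3):
    t2 = theta * theta
    return theta * (1.0 + k0 * t2 + k1 * t2**2 + k2 * t2**3 + k3 * t2**4)

# With all coefficients zero the polynomial reduces to theta itself.
assert kannala_brandt_theta_d(0.5, 0, 0, 0, 0) == 0.5

# Default blueprint coefficients at a 45 degree incidence angle: the
# positive coefficients bend the distorted angle slightly outward.
theta = math.radians(45.0)
distorted = kannala_brandt_theta_d(theta, 0.0831, 0.0111, 0.00858, 0.000854)
```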
```diff
 
 ---
 
 ## RSS sensor
 
 * __Blueprint:__ sensor.other.rss
```
