
Commit b0bb4ca
torchvision: Fix typos
1 parent dae528d

4 files changed, +8 -8 lines changed

torchvision/datapoints/_dataset_wrapper.py
Lines changed: 3 additions & 3 deletions

@@ -37,17 +37,17 @@ def wrap_dataset_for_transforms_v2(dataset, target_keys=None):
     * :class:`~torchvision.datasets.CocoDetection`: Instead of returning the target as list of dicts, the wrapper
       returns a dict of lists. In addition, the key-value-pairs ``"boxes"`` (in ``XYXY`` coordinate format),
       ``"masks"`` and ``"labels"`` are added and wrap the data in the corresponding ``torchvision.datapoints``.
-      The original keys are preserved. If ``target_keys`` is ommitted, returns only the values for the
+      The original keys are preserved. If ``target_keys`` is omitted, returns only the values for the
       ``"image_id"``, ``"boxes"``, and ``"labels"``.
     * :class:`~torchvision.datasets.VOCDetection`: The key-value-pairs ``"boxes"`` and ``"labels"`` are added to
       the target and wrap the data in the corresponding ``torchvision.datapoints``. The original keys are
-      preserved. If ``target_keys`` is ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      preserved. If ``target_keys`` is omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.CelebA`: The target for ``target_type="bbox"`` is converted to the ``XYXY``
       coordinate format and wrapped into a :class:`~torchvision.datapoints.BoundingBoxes` datapoint.
     * :class:`~torchvision.datasets.Kitti`: Instead returning the target as list of dicts, the wrapper returns a
       dict of lists. In addition, the key-value-pairs ``"boxes"`` and ``"labels"`` are added and wrap the data
       in the corresponding ``torchvision.datapoints``. The original keys are preserved. If ``target_keys`` is
-      ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+      omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
     * :class:`~torchvision.datasets.OxfordIIITPet`: The target for ``target_type="segmentation"`` is wrapped into a
       :class:`~torchvision.datapoints.Mask` datapoint.
     * :class:`~torchvision.datasets.Cityscapes`: The target for ``target_type="semantic"`` is wrapped into a
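The CocoDetection and Kitti entries in this docstring describe the same re-packing: a target that arrives as a list of per-instance dicts is returned as a dict of per-key lists, optionally filtered by ``target_keys``. A minimal sketch of that re-packing step in plain Python, assuming nothing about torchvision's internals (the helper name is hypothetical):

```python
def list_of_dicts_to_dict_of_lists(target, target_keys=None):
    """Re-pack a list of per-instance dicts into a dict of per-key lists."""
    keys = set().union(*(instance.keys() for instance in target))
    if target_keys is not None:
        # Keep only the requested keys, mirroring the documented filtering.
        keys &= set(target_keys)
    return {key: [instance[key] for instance in target if key in instance] for key in keys}

target = [
    {"bbox": [0, 0, 10, 10], "category_id": 1},
    {"bbox": [5, 5, 20, 20], "category_id": 3},
]
repacked = list_of_dicts_to_dict_of_lists(target, target_keys={"bbox"})
# repacked == {"bbox": [[0, 0, 10, 10], [5, 5, 20, 20]]}
```

The real wrapper additionally converts the collected values into ``torchvision.datapoints`` types (``BoundingBoxes``, ``Mask``, labels); this sketch shows only the list-of-dicts to dict-of-lists step.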

torchvision/datasets/_stereo_matching.py
Lines changed: 1 addition & 1 deletion

@@ -796,7 +796,7 @@ def _read_disparity(self, file_path: str) -> Tuple[np.ndarray, None]:
         # in order to extract disparity from depth maps
         camera_settings_path = Path(file_path).parent / "_camera_settings.json"
         with open(camera_settings_path, "r") as f:
-            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constatnt)
+            # inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constant)
             intrinsics = json.load(f)
             focal = intrinsics["camera_settings"][0]["intrinsic_settings"]["fx"]
             baseline, pixel_constant = 6, 100  # pixel constant is inverted
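Solving the corrected comment's equation ``depth = (baseline * focal) / (disparity * pixel_constant)`` for disparity gives the conversion the surrounding code applies when reading depth maps. A sketch using the baseline and pixel constant visible in the diff; the function name and the focal length are illustrative assumptions, not values taken from the dataset:

```python
def disparity_from_depth(depth, baseline=6.0, focal=768.0, pixel_constant=100.0):
    # Inverse of: depth = (baseline * focal) / (disparity * pixel_constant)
    return (baseline * focal) / (depth * pixel_constant)

disparity = disparity_from_depth(2.0)                   # 6 * 768 / (2 * 100) = 23.04
depth_roundtrip = (6.0 * 768.0) / (disparity * 100.0)   # recovers the original depth, 2.0
```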

torchvision/io/video_reader.py
Lines changed: 3 additions & 3 deletions

@@ -91,14 +91,14 @@ class VideoReader:

     Each stream descriptor consists of two parts: stream type (e.g. 'video') and
     a unique stream id (which are determined by the video encoding).
-    In this way, if the video contaner contains multiple
+    In this way, if the video container contains multiple
     streams of the same type, users can access the one they want.
     If only stream type is passed, the decoder auto-detects first stream of that type.

     Args:
         src (string, bytes object, or tensor): The media source.
             If string-type, it must be a file path supported by FFMPEG.
-            If bytes should be an in memory representatin of a file supported by FFMPEG.
+            If bytes, should be an in-memory representation of a file supported by FFMPEG.
             If Tensor, it is interpreted internally as byte buffer.
             It must be one-dimensional, of type ``torch.uint8``.

@@ -279,7 +279,7 @@ def set_current_stream(self, stream: str) -> bool:
         Currently available stream types include ``['video', 'audio']``.
         Each descriptor consists of two parts: stream type (e.g. 'video') and
         a unique stream id (which are determined by video encoding).
-        In this way, if the video contaner contains multiple
+        In this way, if the video container contains multiple
         streams of the same type, users can access the one they want.
         If only stream type is passed, the decoder auto-detects first stream
         of that type and returns it.
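The two-part descriptor convention these docstrings describe (``'video:0'`` selects a specific stream; a bare ``'video'`` auto-detects the first stream of that type) can be illustrated with a small parser. This is only a sketch of the documented ``type:id`` format, not torchvision's internal parsing code:

```python
def parse_stream_descriptor(descriptor):
    """Split 'video:1' into ('video', 1); a bare 'audio' yields ('audio', None)."""
    stream_type, sep, stream_id = descriptor.partition(":")
    # A missing separator means no explicit id: the decoder auto-detects.
    return stream_type, (int(stream_id) if sep else None)

parse_stream_descriptor("video:1")  # ('video', 1): explicit stream id
parse_stream_descriptor("audio")    # ('audio', None): first audio stream auto-detected
```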

torchvision/transforms/v2/_geometry.py
Lines changed: 1 addition & 1 deletion

@@ -1023,7 +1023,7 @@ class ElasticTransform(Transform):

     .. note::
         Implementation to transform bounding boxes is approximative (not exact).
-        We construct an approximation of the inverse grid as ``inverse_grid = idenity - displacement``.
+        We construct an approximation of the inverse grid as ``inverse_grid = identity - displacement``.
         This is not an exact inverse of the grid used to transform images, i.e. ``grid = identity + displacement``.
         Our assumption is that ``displacement * displacement`` is small and can be ignored.
         Large displacements would lead to large errors in the approximation.
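The approximation in that note can be checked numerically: composing ``grid = identity + displacement`` with ``inverse_grid = identity - displacement`` leaves a residual on the order of ``displacement * displacement``. A 1-D sketch with a made-up, spatially varying displacement field (with a constant field the composition would be exact):

```python
def displacement(x):
    # Illustrative small displacement field; values are invented for the demo.
    return 0.01 * x

x = 0.5
y = x + displacement(x)        # forward warp: grid = identity + displacement
x_back = y - displacement(y)   # approximate inverse: identity - displacement
error = abs(x_back - x)        # second-order residual, roughly 0.01**2 * x
```

With ``displacement`` scaled up (say ``0.5 * x``), ``error`` grows quadratically, which is the "large displacements lead to large errors" caveat in the note.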
