Fix typos in docstrings #7858

Merged · 5 commits · Aug 21, 2023
6 changes: 3 additions & 3 deletions cmake/iOS.cmake
@@ -10,11 +10,11 @@
# SIMULATOR - used to build for the Simulator platforms, which have an x86 arch.
#
# CMAKE_IOS_DEVELOPER_ROOT = automatic(default) or /path/to/platform/Developer folder
-# By default this location is automatcially chosen based on the IOS_PLATFORM value above.
+# By default this location is automatically chosen based on the IOS_PLATFORM value above.
# If set manually, it will override the default location and force the user of a particular Developer Platform
#
# CMAKE_IOS_SDK_ROOT = automatic(default) or /path/to/platform/Developer/SDKs/SDK folder
-# By default this location is automatcially chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
+# By default this location is automatically chosen based on the CMAKE_IOS_DEVELOPER_ROOT value.
# In this case it will always be the most up-to-date SDK found in the CMAKE_IOS_DEVELOPER_ROOT path.
# If set manually, this will force the use of a specific SDK version

@@ -100,7 +100,7 @@ if(IOS_DEPLOYMENT_TARGET)
set(XCODE_IOS_PLATFORM_VERSION_FLAGS "-m${XCODE_IOS_PLATFORM}-version-min=${IOS_DEPLOYMENT_TARGET}")
endif()

-# Hidden visibilty is required for cxx on iOS
+# Hidden visibility is required for cxx on iOS
set(CMAKE_C_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS}")
set(CMAKE_CXX_FLAGS_INIT "${XCODE_IOS_PLATFORM_VERSION_FLAGS} -fvisibility-inlines-hidden")

2 changes: 1 addition & 1 deletion docs/source/models/fcos.rst
@@ -12,7 +12,7 @@ Model builders
--------------

The following model builders can be used to instantiate a FCOS model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
``torchvision.models.detection.fcos.FCOS`` base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/detection/fcos.py>`_ for
more details about this class.
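
As a minimal usage sketch of the builder pattern described above (the 320×320 input is arbitrary and ``weights=None`` would give an untrained model; the RetinaNet builders in the next file follow the same call pattern):

import torch
from torchvision.models.detection import fcos_resnet50_fpn, FCOS_ResNet50_FPN_Weights

# Instantiate the FCOS builder with pre-trained weights (weights=None for random initialization).
model = fcos_resnet50_fpn(weights=FCOS_ResNet50_FPN_Weights.DEFAULT)
model.eval()
# In eval mode the model returns one dict per image with "boxes", "labels" and "scores".
predictions = model([torch.rand(3, 320, 320)])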
2 changes: 1 addition & 1 deletion docs/source/models/retinanet.rst
@@ -12,7 +12,7 @@ Model builders
--------------

The following model builders can be used to instantiate a RetinaNet model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
``torchvision.models.detection.retinanet.RetinaNet`` base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/detection/retinanet.py>`_ for
more details about this class.
2 changes: 1 addition & 1 deletion docs/source/models/vgg.rst
@@ -11,7 +11,7 @@ Model builders
--------------

The following model builders can be used to instantiate a VGG model, with or
-without pre-trained weights. All the model buidlers internally rely on the
+without pre-trained weights. All the model builders internally rely on the
``torchvision.models.vgg.VGG`` base class. Please refer to the `source code
<https://github.com/pytorch/vision/blob/main/torchvision/models/vgg.py>`_ for
more details about this class.
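
A minimal classification sketch for these builders (the weights enum and its bundled ``transforms()`` follow torchvision's multi-weight API; the random input stands in for a real image):

import torch
from torchvision.models import vgg16, VGG16_Weights

weights = VGG16_Weights.DEFAULT
model = vgg16(weights=weights).eval()    # weights=None would give an untrained VGG
preprocess = weights.transforms()        # the resizing/normalization bundled with the weights
batch = preprocess(torch.rand(3, 224, 224)).unsqueeze(0)
logits = model(batch)                    # shape (1, 1000)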
4 changes: 2 additions & 2 deletions gallery/others/plot_optical_flow.py
@@ -134,7 +134,7 @@ def preprocess(img1_batch, img2_batch):
# (N, 2, H, W) batch of predicted flows that corresponds to a given "iteration"
# in the model. For more details on the iterative nature of the model, please
# refer to the `original paper <https://arxiv.org/abs/2003.12039>`_. Here, we
-# are only interested in the final predicted flows (they are the most acccurate
+# are only interested in the final predicted flows (they are the most accurate
# ones), so we will just retrieve the last item in the list.
#
# As described above, a flow is a tensor with dimensions (2, H, W) (or (N, 2, H,
@@ -151,7 +151,7 @@ def preprocess(img1_batch, img2_batch):
# %%
# Visualizing predicted flows
# ---------------------------
-# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utlity to
+# Torchvision provides the :func:`~torchvision.utils.flow_to_image` utility to
# convert a flow into an RGB image. It also supports batches of flows.
# each "direction" in the flow will be mapped to a given RGB color. In the
# images below, pixels with similar colors are assumed by the model to be moving
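
A condensed sketch of the two steps discussed in these hunks, assuming small inputs whose sides are divisible by 8 and roughly normalized to [-1, 1] (in the gallery they come from ``preprocess(img1_batch, img2_batch)``; ``raft_small`` is used here only for brevity):

import torch
from torchvision.models.optical_flow import raft_small, Raft_Small_Weights
from torchvision.utils import flow_to_image

img1_batch = torch.rand(2, 3, 128, 128) * 2 - 1   # placeholder frames
img2_batch = torch.rand(2, 3, 128, 128) * 2 - 1
model = raft_small(weights=Raft_Small_Weights.DEFAULT).eval()
with torch.no_grad():
    list_of_flows = model(img1_batch, img2_batch)  # one (N, 2, H, W) flow per iteration
predicted_flows = list_of_flows[-1]                # keep only the final, most accurate flows
flow_imgs = flow_to_image(predicted_flows)         # (N, 3, H, W) uint8 RGB images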
2 changes: 1 addition & 1 deletion gallery/v2_transforms/plot_custom_transforms.py
@@ -84,7 +84,7 @@ def forward(self, img, bboxes, label): # we assume inputs are always structured
# In the section above, we have assumed that you already know the structure of
# your inputs and that you're OK with hard-coding this expected structure in
# your code. If you want your custom transforms to be as flexible as possible,
-# this can be a bit limitting.
+# this can be a bit limiting.
#
# A key feature of the builtin Torchvision V2 transforms is that they can accept
# arbitrary input structure and return the same structure as output (with
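
A minimal sketch of that pass-through behaviour (the dict layout and tensor size are arbitrary; with no other image-like entry present, the plain tensor is treated as the image by the v2 heuristics):

import torch
from torchvision.transforms import v2

sample = {"img": torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8), "label": 3}
flipped = v2.RandomHorizontalFlip(p=1.0)(sample)
# `flipped` is again a dict with the same keys: the image entry is flipped,
# while the plain integer label passes through untouched.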
2 changes: 1 addition & 1 deletion test/test_models.py
@@ -1037,7 +1037,7 @@ def test_raft(model_fn, scripted):
torch.manual_seed(0)

# We need very small images, otherwise the pickle size would exceed the 50KB
-# As a resut we need to override the correlation pyramid to not downsample
+# As a result we need to override the correlation pyramid to not downsample
# too much, otherwise we would get nan values (effective H and W would be
# reduced to 1)
corr_block = models.optical_flow.raft.CorrBlock(num_levels=2, radius=2)
6 changes: 3 additions & 3 deletions torchvision/datapoints/_dataset_wrapper.py
@@ -37,17 +37,17 @@ def wrap_dataset_for_transforms_v2(dataset, target_keys=None):
* :class:`~torchvision.datasets.CocoDetection`: Instead of returning the target as list of dicts, the wrapper
returns a dict of lists. In addition, the key-value-pairs ``"boxes"`` (in ``XYXY`` coordinate format),
``"masks"`` and ``"labels"`` are added and wrap the data in the corresponding ``torchvision.datapoints``.
-The original keys are preserved. If ``target_keys`` is ommitted, returns only the values for the
+The original keys are preserved. If ``target_keys`` is omitted, returns only the values for the
``"image_id"``, ``"boxes"``, and ``"labels"``.
* :class:`~torchvision.datasets.VOCDetection`: The key-value-pairs ``"boxes"`` and ``"labels"`` are added to
the target and wrap the data in the corresponding ``torchvision.datapoints``. The original keys are
-preserved. If ``target_keys`` is ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+preserved. If ``target_keys`` is omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
* :class:`~torchvision.datasets.CelebA`: The target for ``target_type="bbox"`` is converted to the ``XYXY``
coordinate format and wrapped into a :class:`~torchvision.datapoints.BoundingBoxes` datapoint.
* :class:`~torchvision.datasets.Kitti`: Instead returning the target as list of dicts, the wrapper returns a
dict of lists. In addition, the key-value-pairs ``"boxes"`` and ``"labels"`` are added and wrap the data
in the corresponding ``torchvision.datapoints``. The original keys are preserved. If ``target_keys`` is
-ommitted, returns only the values for the ``"boxes"`` and ``"labels"``.
+omitted, returns only the values for the ``"boxes"`` and ``"labels"``.
* :class:`~torchvision.datasets.OxfordIIITPet`: The target for ``target_type="segmentation"`` is wrapped into a
:class:`~torchvision.datapoints.Mask` datapoint.
* :class:`~torchvision.datasets.Cityscapes`: The target for ``target_type="semantic"`` is wrapped into a
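
A hedged sketch of wrapping a COCO-style dataset as documented here (the paths are placeholders; the wrapper lives in this ``torchvision.datapoints`` module and is assumed to be importable from there in this version):

from torchvision import datapoints, datasets

coco = datasets.CocoDetection("path/to/images", "path/to/annotations.json")
coco = datapoints.wrap_dataset_for_transforms_v2(coco, target_keys=("image_id", "boxes", "labels"))
img, target = coco[0]
# target["boxes"] is now a datapoints.BoundingBoxes in XYXY format and
# target["labels"] a tensor of class ids, ready for the v2 transforms.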
2 changes: 1 addition & 1 deletion torchvision/datasets/_stereo_matching.py
@@ -796,7 +796,7 @@ def _read_disparity(self, file_path: str) -> Tuple[np.ndarray, None]:
# in order to extract disparity from depth maps
camera_settings_path = Path(file_path).parent / "_camera_settings.json"
with open(camera_settings_path, "r") as f:
-# inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constatnt)
+# inverse of depth-from-disparity equation: depth = (baseline * focal) / (disparity * pixel_constant)
intrinsics = json.load(f)
focal = intrinsics["camera_settings"][0]["intrinsic_settings"]["fx"]
baseline, pixel_constant = 6, 100 # pixel constant is inverted
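
Rearranging the depth-from-disparity equation quoted in this hunk gives the disparity the loader needs; a small numeric check with illustrative values (the real ``focal`` is the fx entry read from ``_camera_settings.json``):

# depth = (baseline * focal) / (disparity * pixel_constant)
# => disparity = (baseline * focal) / (depth * pixel_constant)
baseline, pixel_constant = 6, 100   # the constants hard-coded above
focal = 768.0                       # example fx value
depth = 12.0                        # example depth, same units as baseline
disparity = (baseline * focal) / (depth * pixel_constant)   # -> 3.84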
6 changes: 3 additions & 3 deletions torchvision/io/video_reader.py
@@ -91,14 +91,14 @@ class VideoReader:

Each stream descriptor consists of two parts: stream type (e.g. 'video') and
a unique stream id (which are determined by the video encoding).
-In this way, if the video contaner contains multiple
+In this way, if the video container contains multiple
streams of the same type, users can access the one they want.
If only stream type is passed, the decoder auto-detects first stream of that type.

Args:
src (string, bytes object, or tensor): The media source.
If string-type, it must be a file path supported by FFMPEG.
-If bytes should be an in memory representatin of a file supported by FFMPEG.
+If bytes, should be an in-memory representation of a file supported by FFMPEG.
If Tensor, it is interpreted internally as byte buffer.
It must be one-dimensional, of type ``torch.uint8``.

@@ -279,7 +279,7 @@ def set_current_stream(self, stream: str) -> bool:
Currently available stream types include ``['video', 'audio']``.
Each descriptor consists of two parts: stream type (e.g. 'video') and
a unique stream id (which are determined by video encoding).
-In this way, if the video contaner contains multiple
+In this way, if the video container contains multiple
streams of the same type, users can access the one they want.
If only stream type is passed, the decoder auto-detects first stream
of that type and returns it.
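
A short sketch of both forms of stream descriptor ("video.mp4" is a placeholder path, and a torchvision build with video decoding support is assumed):

from torchvision.io import VideoReader

reader = VideoReader("video.mp4", "video")   # type only: auto-detects the first video stream
frame = next(iter(reader))                   # dict with "data" (frame tensor) and "pts" (seconds)
reader.set_current_stream("audio:0")         # explicit "type:id" descriptor picks a specific stream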
2 changes: 1 addition & 1 deletion torchvision/transforms/v2/_geometry.py
@@ -1023,7 +1023,7 @@ class ElasticTransform(Transform):

.. note::
Implementation to transform bounding boxes is approximative (not exact).
-We construct an approximation of the inverse grid as ``inverse_grid = idenity - displacement``.
+We construct an approximation of the inverse grid as ``inverse_grid = identity - displacement``.
This is not an exact inverse of the grid used to transform images, i.e. ``grid = identity + displacement``.
Our assumption is that ``displacement * displacement`` is small and can be ignored.
Large displacements would lead to large errors in the approximation.
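
A minimal usage sketch of the transform this note belongs to (the ``alpha``/``sigma`` values and input size are illustrative; bounding boxes passed alongside the image would be moved with the approximate inverse grid described above):

import torch
from torchvision.transforms import v2

elastic = v2.ElasticTransform(alpha=50.0, sigma=5.0)
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
out = elastic(img)   # same shape as the input, with the elastic displacement applied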