Commit f6c6684: fix errors
Parent: 8371fce

10 files changed: 96 additions, 100 deletions

doc/amazon_sagemaker_operators_for_kubernetes.rst

Lines changed: 79 additions & 79 deletions (large diff not rendered by default)

doc/analytics.rst

Lines changed: 1 addition & 1 deletion

@@ -19,4 +19,4 @@ Analytics
 .. autoclass:: sagemaker.analytics.ExperimentAnalytics
     :members:
     :undoc-members:
-    :show-inheritance:
+    :show-inheritance:

(the removed and added lines differ only in trailing whitespace)
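Three ``.rst`` files in this commit (``doc/analytics.rst``, ``doc/predictors.rst``, and ``doc/xgboost.rst``) receive the same one-line whitespace fix to a Sphinx ``autoclass`` block. For reference, the directive after the fix looks like this (the four-space option indentation is an assumption; Sphinx only requires that the option fields be indented consistently under the directive):

```rst
.. autoclass:: sagemaker.analytics.ExperimentAnalytics
    :members:
    :undoc-members:
    :show-inheritance:
```

``:members:`` documents the class's public members, ``:undoc-members:`` also includes members without docstrings, and ``:show-inheritance:`` lists the base classes.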

doc/predictors.rst

Lines changed: 1 addition & 1 deletion

@@ -6,4 +6,4 @@ Make real-time predictions against SageMaker endpoints with Python objects
 .. autoclass:: sagemaker.predictor.RealTimePredictor
     :members:
     :undoc-members:
-    :show-inheritance:
+    :show-inheritance:

(the removed and added lines differ only in trailing whitespace)

doc/using_xgboost.rst

Lines changed: 8 additions & 11 deletions

@@ -17,7 +17,7 @@ The XGBoost open source algorithm provides the following benefits over the built
 * Latest version - The open source XGBoost algorithm typically supports a more recent version of XGBoost.
   To see the XGBoost version that is currently supported,
   see `XGBoost SageMaker Estimators and Models <https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/xgboost#xgboost-sagemaker-estimators-and-models>`__.
-* Flexibility - Take advantage of the full range of XGBoost functionality, such as cross-validation support.
+* Flexibility - Take advantage of the full range of XGBoost functionality, such as cross-validation support.
   You can add custom pre- and post-processing logic and run additional code after training.
 * Scalability - The XGBoost open source algorithm has a more efficient implementation of distributed training,
   which enables it to scale out to more instances and reduce out-of-memory errors.

@@ -100,14 +100,14 @@ such as the location of input data and location where we want to save the model.
     parser.add_argument('--max_depth', type=int, default=5)
     parser.add_argument('--eta', type=float, default=0.2)
     parser.add_argument('--objective', type=str, default='reg:squarederror')
-
+
     # SageMaker specific arguments. Defaults are set in the environment variables.
     parser.add_argument('--model_dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
     parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
     parser.add_argument('--validation', type=str, default=os.environ['SM_CHANNEL_VALIDATION'])
-
+
     args = parser.parse_args()
-
+
     train_hp = {
         'max_depth': args.max_depth,
         'eta': args.eta,

@@ -117,15 +117,15 @@ such as the location of input data and location where we want to save the model.
         'silent': args.silent,
         'objective': args.objective
     }
-
+
     dtrain = xgb.DMatrix(args.train)
     dval = xgb.DMatrix(args.validation)
     watchlist = [(dtrain, 'train'), (dval, 'validation')] if dval is not None else [(dtrain, 'train')]

     callbacks = []
     prev_checkpoint, n_iterations_prev_run = add_checkpointing(callbacks)
     # If checkpoint is found then we reduce num_boost_round by previously run number of iterations
-
+
     bst = xgb.train(
         params=train_hp,
         dtrain=dtrain,

@@ -134,7 +134,7 @@ such as the location of input data and location where we want to save the model.
         xgb_model=prev_checkpoint,
         callbacks=callbacks
     )
-
+
     # Save the model to the location specified by ``model_dir``
     model_location = args.model_dir + '/xgboost-model'
     pkl.dump(bst, open(model_location, 'wb'))

@@ -154,7 +154,7 @@ and a dictionary of the hyperparameters to pass to the training script.
     xgb_estimator = XGBoost(
         entry_point="abalone.py",
         hyperparameters=hyperparameters,
-        role=role,
+        role=role,
         train_instance_count=1,
         train_instance_type="ml.m5.2xlarge",
         framework_version="0.90-1",

@@ -210,6 +210,3 @@ SageMaker XGBoost Docker Containers
 ***********************************
 
 For information about SageMaker XGBoost Docker container and its dependencies, see `SageMaker XGBoost Container <https://github.com/aws/sagemaker-xgboost-container>`_.
-
-
-

(the changed lines differ only in trailing whitespace; the last hunk drops three trailing blank lines)
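The hunks above touch the argument-parsing section of a script-mode training script. As a minimal, self-contained sketch of that pattern (it swaps ``os.environ[...]`` for ``os.environ.get(...)`` so the snippet also runs outside a SageMaker container, where the ``SM_*`` variables are not set, and it omits the actual ``xgb.train`` call, which needs the ``xgboost`` package):

```python
import argparse
import os


def parse_args(argv=None):
    """Parse hyperparameters and SageMaker channel/model locations.

    Inside a SageMaker training container, SM_MODEL_DIR, SM_CHANNEL_TRAIN,
    and SM_CHANNEL_VALIDATION are set in the environment, so the defaults
    below resolve to the model output and input data directories.
    """
    parser = argparse.ArgumentParser()

    # Hyperparameters passed by the estimator.
    parser.add_argument('--max_depth', type=int, default=5)
    parser.add_argument('--eta', type=float, default=0.2)
    parser.add_argument('--objective', type=str, default='reg:squarederror')

    # SageMaker specific arguments; defaults come from environment variables.
    parser.add_argument('--model_dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))
    parser.add_argument('--validation', type=str, default=os.environ.get('SM_CHANNEL_VALIDATION'))

    return parser.parse_args(argv)


if __name__ == '__main__':
    # Empty argv so the sketch runs without command-line arguments.
    args = parse_args([])
    train_hp = {'max_depth': args.max_depth, 'eta': args.eta, 'objective': args.objective}
    print(train_hp)
```

In a real training job, the estimator forwards its ``hyperparameters`` dictionary to this script as ``--key value`` command-line arguments, which is why the hyperparameters appear as ``add_argument`` entries rather than hard-coded values.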

doc/xgboost.rst

Lines changed: 1 addition & 1 deletion

@@ -15,4 +15,4 @@ The Amazon SageMaker XGBoost open source framework algorithm.
 .. autoclass:: sagemaker.xgboost.model.XGBoostPredictor
     :members:
     :undoc-members:
-    :show-inheritance:
+    :show-inheritance:

(the removed and added lines differ only in trailing whitespace)

src/sagemaker/chainer/README.rst

Lines changed: 2 additions & 2 deletions

@@ -11,7 +11,7 @@ You can visit the Chainer repository at https://github.com/chainer/chainer.
 For information about using Chainer with the SageMaker Python SDK, see https://sagemaker.readthedocs.io/en/stable/using_chainer.html.
 
 Chainer Training Examples
-~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Amazon provides several example Jupyter notebooks that demonstrate end-to-end training on Amazon SageMaker using Chainer.
 Please refer to:

@@ -22,7 +22,7 @@ These are also available in SageMaker Notebook Instance hosted Jupyter notebooks
 
 
 SageMaker Chainer Docker containers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 When training and deploying training scripts, SageMaker runs your Python script in a Docker container with several
 libraries installed. When creating the Estimator and calling deploy to create the SageMaker Endpoint, you can control
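Both hunks in this README fix the same class of reStructuredText error: a section underline whose length does not match its title. The first underline was 23 tildes under a 25-character title, which makes Sphinx emit a "title underline too short" warning; the second was longer than its title and trimming it to match exactly is purely stylistic, since rst only requires the underline to be at least as long as the title. A small hypothetical checker (not part of the SDK) for this rule:

```python
def underline_matches(title: str, underline: str) -> bool:
    """Return True when `underline` is a single repeated rst adornment
    character and is at least as long as `title`.

    rst requires the underline to be at least as long as the title;
    shorter underlines trigger a Sphinx build warning.
    """
    if not title or not underline:
        return False
    char = underline[0]
    return underline == char * len(underline) and len(underline) >= len(title)


# The underline fixed in this commit: 23 tildes under a 25-character title.
print(underline_matches("Chainer Training Examples", "~" * 23))  # False: too short
print(underline_matches("Chainer Training Examples", "~" * 25))  # True: exact match
```

A check like this could run over headings extracted from a docs tree to catch such warnings before a Sphinx build.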

src/sagemaker/tensorflow/deploying_tensorflow_serving.rst

Lines changed: 1 addition & 2 deletions

@@ -262,8 +262,7 @@ Specifying the output of a prediction request
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 The structure of the prediction ``result`` is determined at the end of the training process before SavedModel is created. For example, if
-you are using TensorFlow's Estimator API for training, you control inference outputs using the ``export_outputs`` parameter of the `tf.estimator.EstimatorSpec <https://www.tensorflow.org/api_docs/python/tf/estimator/EstimatorSpec>`_ that you return from
-your ``model_fn`` (see `Example of a complete model_fn`_ for an example of ``export_outputs``).
+you are using TensorFlow's Estimator API for training, you control inference outputs using the ``export_outputs`` parameter of the `tf.estimator.EstimatorSpec <https://www.tensorflow.org/api_docs/python/tf/estimator/EstimatorSpec>`_ that you return from your ``model_fn``.
 
 More information on how to create ``export_outputs`` can be found in `specifying the outputs of a custom model <https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/docs_src/programmers_guide/saved_model.md#specifying-the-outputs-of-a-custom-model>`_. You can also
 refer to TensorFlow's `Save and Restore <https://www.tensorflow.org/guide/saved_model>`_ documentation for other ways to control the

(this change drops a cross-reference to an `Example of a complete model_fn` section)

src/sagemaker/workflow/README.rst

Lines changed: 1 addition & 1 deletion

@@ -10,4 +10,4 @@ is a platform that enables you to programmatically author, schedule, and monitor
 you can build a workflow for SageMaker training, hyperparameter tuning, batch transform and endpoint deployment.
 You can use any SageMaker deep learning framework or Amazon algorithms to perform above operations in Airflow.
 
-For information about using SageMaker Workflow, see https://sagemaker.readthedocs.io/en/stable/using_workflow.html.
+For information about using SageMaker Workflow, see https://sagemaker.readthedocs.io/en/stable/using_workflow.html.

(the removed and added lines differ only in trailing whitespace)
(two asset files; file names are not shown in this view)

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-asset-file-contents
+asset-file-contents

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-asset-file-contents
+asset-file-contents

(in both files, the removed and added lines differ only in whitespace)
