
Commit 6bac408

gwang111, Raymond Liu, ruiliann666, John Barboza, Malav Shastri
committed

fix: Model builder Final Fixes (#1369)
Co-authored-by: Raymond Liu <[email protected]> Co-authored-by: Ruilian Gao <[email protected]> Co-authored-by: John Barboza <[email protected]> Co-authored-by: Gary Wang <[email protected]> Co-authored-by: Malav Shastri <[email protected]> Co-authored-by: Keshav Chandak <[email protected]> Co-authored-by: Zuoyuan Huang <[email protected]> Co-authored-by: Ao Guo <[email protected]> Co-authored-by: Mufaddal Rohawala <[email protected]> Co-authored-by: Mike Schneider <[email protected]> Co-authored-by: Bhupendra Singh <[email protected]> Co-authored-by: ci <ci> Co-authored-by: Malav Shastri <[email protected]> Co-authored-by: evakravi <[email protected]> Co-authored-by: Keshav Chandak <[email protected]> Co-authored-by: Alexander Pivovarov <[email protected]> Co-authored-by: qidewenwhen <[email protected]> Co-authored-by: mariumof <[email protected]> Co-authored-by: matherit <[email protected]> Co-authored-by: amzn-choeric <[email protected]> Co-authored-by: Ao Guo <[email protected]> Co-authored-by: Sally Seok <[email protected]> Co-authored-by: Erick Benitez-Ramos <[email protected]> Co-authored-by: Qingzi-Lan <[email protected]> Co-authored-by: Sally Seok <[email protected]> Co-authored-by: Manu Seth <[email protected]> Co-authored-by: Miyoung <[email protected]> Co-authored-by: Sarah Castillo <[email protected]> Co-authored-by: EC2 Default User <[email protected]> Co-authored-by: EC2 Default User <[email protected]> Co-authored-by: EC2 Default User <[email protected]> Co-authored-by: Xin Wang <[email protected]> Co-authored-by: stacicho <[email protected]> Co-authored-by: martinRenou <[email protected]> Co-authored-by: jiapinw <[email protected]> Co-authored-by: Akash Goel <[email protected]> Co-authored-by: Joseph Zhang <[email protected]> Co-authored-by: Harsha Reddy <[email protected]> Co-authored-by: Haixin Wang <[email protected]> Co-authored-by: Kalyani Nikure <[email protected]> Co-authored-by: Xin Wang <[email protected]> Co-authored-by: Gili Nachum 
<[email protected]> Co-authored-by: Jose Pena <[email protected]> Co-authored-by: cansun <[email protected]> Co-authored-by: AWS-pratab <[email protected]> Co-authored-by: shenlongtang <[email protected]> Co-authored-by: Zach Kimberg <[email protected]> Co-authored-by: chrivtho-github <[email protected]> Co-authored-by: Justin <[email protected]> Co-authored-by: Duc Trung Le <[email protected]> Co-authored-by: HappyAmazonian <[email protected]> Co-authored-by: cj-zhang <[email protected]> Co-authored-by: Matthew <[email protected]> Co-authored-by: Zach Kimberg <[email protected]> Co-authored-by: Rohith Nadimpally <[email protected]> Co-authored-by: rohithn1 <[email protected]> Co-authored-by: Victor Zhu <[email protected]> Co-authored-by: Gary Wang <[email protected]> Co-authored-by: SSRraymond <[email protected]> Co-authored-by: jbarz1 <[email protected]> Co-authored-by: Mohan Gandhi <[email protected]> Co-authored-by: Mohan Gandhi <[email protected]> Co-authored-by: Barboza <[email protected]> Co-authored-by: ruiliann666 <[email protected]> Co-authored-by: Rohan Gujarathi <[email protected]> Co-authored-by: svia3 <[email protected]> Co-authored-by: Zhankui Lu <[email protected]> Co-authored-by: Dewen Qi <[email protected]> Co-authored-by: Edward Sun <[email protected]> Co-authored-by: Stephen Via <[email protected]> Co-authored-by: Namrata Madan <[email protected]> Co-authored-by: Stacia Choe <[email protected]> Co-authored-by: Edward Sun <[email protected]> Co-authored-by: Edward Sun <[email protected]> Co-authored-by: Rohan Gujarathi <[email protected]> Co-authored-by: JohnaAtAWS <[email protected]> Co-authored-by: Vera Yu <[email protected]> Co-authored-by: bhaoz <[email protected]> Co-authored-by: Qing Lan <[email protected]> Co-authored-by: Namrata Madan <[email protected]> Co-authored-by: Sirut Buasai <[email protected]> Co-authored-by: wayneyao <[email protected]> Co-authored-by: Jacky Lee <[email protected]> Co-authored-by: haNa-meister <[email 
protected]> Co-authored-by: Shailav <[email protected]>

Squashed commit messages:

Fix unit tests (#1018) Fix happy hf test (#1026) fix logic setup (#1034) fixes (#1045) Fix flake error in init (#1050) fix (#1053) fix: skip tensorflow local mode notebook test (#4060) fix: tags for jumpstart model package models (#4061) fix: pipeline variable kms key (#4065) fix: jumpstart cache using sagemaker session s3 client (#4051) fix: gated models unsupported region (#4069) fix: pipeline upsert failed to pass parallelism_config to update (#4066) fix: temporarily skip kmeans notebook (#4092) fixes (#1051) Fix missing absolute import error (#1057) Fix flake8 error in unit test (#1058) fixes (#1056) Fix flake8 error in integ test (#1060) Fix black format error in test_pickle_dependencies (#1062) Fix docstyle error under serve (#1065) Fix docstyle error in builder failure (#1066) fix black and flake8 formatting (#1069) Fix format error (#1070) Fix integ test (#1074) fix: HuggingFaceProcessor parameterized instance_type when image_uri is absent (#4072) fix: log message when sdk defaults not applied (#4104) fix: handle bad jumpstart default session (#4109) Fix the version information, whl and flake8 (#1085) Fix JSON serializer error (#1088) Fix unit test (#1091) fix format (#1103) Fix local mode predictor (#1107) Fix DJLPredictor (#1108) Fix modelbuilder unit tests (#1118) fixes (#1136) fixes (#1165) fixes (#1166) fix: auto ml integ tests and add flaky test markers (#4136) fix model data for JumpStartModel (#4135) fix: transform step unit test (#4151) fix: Update pipeline.py and selective_execution_config.py with small fixes (#1099) fix: Fixed bug in _create_training_details (#4141) fix: use correct line endings and s3 uris on windows (#4118) fix: js tagging s3 prefix (#4167) fix: Update Ec2 instance type to g5.4xlarge in test_huggingface_torch_distributed.py (#4181) fix: import error in unsupported js regions (#4188) fix: update local mode schema (#4185) fix: fix flaky Inference Recommender integration
tests (#4156) fix: clone distribution in validate_distribution (#4205) Fix hyperlinks in feature_processor.scheduler parameter descriptions (#4208) Fix master merge formatting (#1186) Fix master unit tests (#1203) Fix djl unit tests (#1204) Fix merge conflicts (#1217) fix: fix URL links (#4217) fix: bump urllib3 version (#4223) fix: relax upper bound on urllib in local mode requirements (#4219) fixes (#1224) fix formatting (#1233) fix byoc unit tests (#1235) fix byoc unit tests (#1236) Fixed Modelpackage's deploy calling model's deploy (#1155) fix: jumpstart unit-test (#1265) fixes (#963) Fix TorchTensorSer/Deser (#969) fix (#971) fix local container mode (#972) Fix auto detect (#979) Fix routing fn (#981) fix local container serialization (#989) fix custom serialiazation with local container. Also remove a lot of unused code (#994) Fix custom serialization for local container mode (#1000) fix pytorch version (#1001) Fix unit test (#990) fix: Multiple bug fixes including removing unsupported feature. (#1105) Fix some problems with pipeline compilation (#1125) fix: Refactor JsonGet s3 URI and add serialize_output_to_json flag (#1164) fix: invoke_function circular import (#1262) fix: pylint (#1264) fix: Add logging for docker build failures (#1267) Fix session bug when provided in ModelBuilder (#1288) fixes (#1313) fix: Gated content bucket env var override (#1280) fix: Change the library used in pytorch test causing cloudpickle version conflict (#1287) fix: HMAC signing for ModelBuilder Triton python backend (#1282) fix: do not delete temp folder generated by sdist (#1291) fix: Do not require model_server if provided image_uri is a 1p image. 
(#1303) fix: check image type vs instance type (#1307) fix: unit test (#1315) fix: Fixed model builder's register unable to deploy (#1323) fix: missing `self._framework` in `InferenceSpec` path (#1325) fix: enable xgboost integ test in our own pipeline (#1326) fix: skip py310 (#1328) fix: Update autodetect dlc logic (#1329) Fix secret key in the Model object (#1334) fix: improve error message (#1333) Fix unit testing (#1340) fix: Typing and formatting (#1341) fix: WaiterError on failed pipeline execution. results() (#1337) Fix tox identified errors (#1344) Fix issue when the user runs in Python 3.11 (#1345) fixes (#1346) fix: use copy instead of move in bootstrap script (#1339) Resolve keynote3 conflicts (#1351) Resolve keynote3 conflicts v2 (#1353) Fix conflicts (#1354) Fix conflicts v3 (#1355) fix: get whl from local to run integ tests (#1357) fix: enable triton pt tests (#1358) fix: integ test (#1362) Fix Python 3.11 issue with dataclass decorator (#1345) fix: remote function include_local_workdir default value (#1342) fix: error message (#1373) fixes (#1372) fix: Remvoe PickleSerializer (#1378)
1 parent c0a5671 commit 6bac408

File tree

71 files changed, +2403 / -485 lines changed


requirements/extras/test_requirements.txt

Lines changed: 2 additions & 0 deletions
@@ -36,4 +36,6 @@ transformers==4.32.0
 sentencepiece==0.1.99
 # https://github.com/triton-inference-server/server/issues/6246
 tritonclient[http]<2.37.0
+onnx==1.14.1
+# tf2onnx==1.15.1
 nbformat>=5.9,<6

setup.py

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,7 @@ def read_requirements(filename):
     "requests",
     "docker",
     "tqdm",
+    "psutil",
 ]

 # Specific use case dependencies
@@ -106,6 +107,7 @@ def read_requirements(filename):
     description="Open source library for training and deploying models on Amazon SageMaker.",
     packages=find_packages("src"),
     package_dir={"": "src"},
+    package_data={"": ["*.whl"]},
     py_modules=[os.path.splitext(os.path.basename(path))[0] for path in glob("src/*.py")],
     include_package_data=True,
     long_description=read("README.rst"),
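The two setup.py hunks add psutil as a core dependency (presumably for the RAM-usage measurements used elsewhere in this commit) and ship bundled wheel files via package_data. An abridged, hedged sketch of the resulting setup() call follows; the install_requires list shown here is illustrative, not the full set from the real file:

```python
# Abridged sketch of setup.py after this commit; most arguments omitted.
import os
from glob import glob

from setuptools import find_packages, setup

setup(
    name="sagemaker",
    packages=find_packages("src"),
    package_dir={"": "src"},
    # New: include any bundled .whl files in the built distribution.
    package_data={"": ["*.whl"]},
    py_modules=[os.path.splitext(os.path.basename(path))[0] for path in glob("src/*.py")],
    include_package_data=True,
    # "psutil" joins the core requirements (illustrative subset of the real list).
    install_requires=["boto3", "requests", "docker", "tqdm", "psutil"],
)
```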

src/sagemaker/base_deserializers.py

Lines changed: 0 additions & 34 deletions
@@ -22,7 +22,6 @@
 
 import numpy as np
 from six import with_metaclass
-import cloudpickle
 
 from sagemaker.utils import DeferredError
 
@@ -378,36 +377,3 @@ def deserialize(self, stream, content_type="tensor/pt"):
             "Unable to deserialize your data to torch.Tensor.\
             Please provide custom deserializer in InferenceSpec."
         )
-
-
-class PickleDeserializer(SimpleBaseDeserializer):
-    """Deserialize stream to object using cloudpickle module.
-
-    Args:
-        stream (botocore.response.StreamingBody): Data to be deserialized.
-        content_type (str): The MIME type of the data.
-
-    Returns:
-        object: The data deserialized into a torch Tensor.
-    """
-
-    def __init__(self, accept="application/x-pkl"):
-        super(PickleDeserializer, self).__init__(accept)
-
-    def deserialize(self, stream, content_type="application/x-pkl"):
-        """Deserialize pickle data from an inference endpoint.
-
-        Args:
-            stream (botocore.response.StreamingBody): Data to be deserialized.
-            content_type (str): The MIME type of the data.
-
-        Returns:
-            list: A list of piclke serializable objects.
-        """
-        try:
-            return cloudpickle.loads(stream.read())
-        except Exception:
-            raise ValueError(
-                "Cannot deserialize bytes to object with cloudpickle.\
-                Please provide custom deserializer."
-            )
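With PickleDeserializer deleted, callers that relied on it can define their own deserializer instead. A minimal self-contained sketch of that replacement, assuming a stubbed SimpleBaseDeserializer (the real one lives in sagemaker) and stdlib pickle standing in for cloudpickle:

```python
import io
import pickle


class SimpleBaseDeserializer:
    """Hypothetical stand-in for sagemaker's SimpleBaseDeserializer."""

    def __init__(self, accept="application/x-pkl"):
        self.accept = accept


class CustomPickleDeserializer(SimpleBaseDeserializer):
    """Deserialize a byte stream back into a Python object.

    Mirrors the removed PickleDeserializer; stdlib pickle is used here
    instead of cloudpickle so the sketch is self-contained.
    """

    def deserialize(self, stream, content_type="application/x-pkl"):
        try:
            # stream mimics botocore's StreamingBody: anything with .read()
            return pickle.loads(stream.read())
        except Exception as e:
            raise ValueError(
                "Cannot deserialize bytes to object. "
                "Please provide a custom deserializer."
            ) from e


payload = io.BytesIO(pickle.dumps([1, 2, 3]))
assert CustomPickleDeserializer().deserialize(payload) == [1, 2, 3]
```

Note that deserializing pickled bytes from an endpoint should only be done when the payload source is trusted, which is one common motivation for removing pickle-based (de)serializers from a public SDK.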

src/sagemaker/base_serializers.py

Lines changed: 0 additions & 25 deletions
@@ -21,7 +21,6 @@
 import numpy as np
 from pandas import DataFrame
 from six import with_metaclass
-import cloudpickle
 
 from sagemaker.utils import DeferredError
 
@@ -465,27 +464,3 @@ def serialize(self, data):
         )
 
         raise ValueError("Object of type %s is not a torch.Tensor" % type(data))
-
-
-class PickleSerializer(SimpleBaseSerializer):
-    """Serialize an arbitrary object using cloudpickle module."""
-
-    def __init__(self, content_type="application/x-pkl"):
-        super(PickleSerializer, self).__init__(content_type)
-
-    def serialize(self, data: object) -> bytes:
-        """Serialize an arbitrary object using cloudpickle module.
-
-        Args:
-            data (object): Data to be serialized. The data must be of torch.Tensor type.
-        Returns:
-            raw-bytes: The data serialized as raw-bytes from the input.
-        """
-        try:
-            return cloudpickle.dumps(data)
-        except Exception:
-            raise ValueError(
-                "Cannot serialize your object of type %s into bytes with cloudpickle.\
-                Please provide custom serializer."
-                % type(data)
-            )
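The serializer side mirrors the deserializer removal. A hedged sketch of a user-supplied replacement, again stubbing SimpleBaseSerializer and using stdlib pickle in place of cloudpickle:

```python
import pickle


class SimpleBaseSerializer:
    """Hypothetical stand-in for sagemaker's SimpleBaseSerializer."""

    def __init__(self, content_type="application/x-pkl"):
        self.content_type = content_type


class CustomPickleSerializer(SimpleBaseSerializer):
    """Serialize an arbitrary picklable object to raw bytes,
    mirroring the removed PickleSerializer."""

    def serialize(self, data: object) -> bytes:
        try:
            return pickle.dumps(data)
        except Exception as e:
            raise ValueError(
                "Cannot serialize object of type %s into bytes. "
                "Please provide a custom serializer." % type(data)
            ) from e


raw = CustomPickleSerializer().serialize({"a": 1})
assert pickle.loads(raw) == {"a": 1}
```

Unlike stdlib pickle, cloudpickle can also serialize lambdas and locally defined functions; with this stdlib-based sketch those inputs raise the ValueError path instead.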

src/sagemaker/deserializers.py

Lines changed: 0 additions & 1 deletion
@@ -31,7 +31,6 @@
     StreamDeserializer,
     StringDeserializer,
     TorchTensorDeserializer,
-    PickleDeserializer,
 )
 
 from sagemaker.jumpstart import artifacts, utils as jumpstart_utils

src/sagemaker/djl_inference/model.py

Lines changed: 5 additions & 0 deletions
@@ -180,6 +180,11 @@ def _get_model_config_properties_from_hf(model_id: str, hf_hub_token: str = None
             model_config = json.load(response)
             break
         except (HTTPError, URLError, TimeoutError, JSONDecodeError) as e:
+            if "HTTP Error 401: Unauthorized" in str(e):
+                raise ValueError(
+                    "Trying to access a gated/private HuggingFace model without valid credentials. "
+                    "Please provide a HUGGING_FACE_HUB_TOKEN in env_vars"
+                )
             logger.warning(
                 "Exception encountered while trying to read config file %s. " "Details: %s",
                 config_file_url,

src/sagemaker/huggingface/model.py

Lines changed: 1 addition & 0 deletions
@@ -326,6 +326,7 @@ def deploy(
             container_startup_health_check_timeout=container_startup_health_check_timeout,
             inference_recommendation_id=inference_recommendation_id,
             explainer_config=explainer_config,
+            endpoint_logging=kwargs.get("endpoint_logging", False),
         )
 
     def register(

src/sagemaker/serializers.py

Lines changed: 0 additions & 1 deletion
@@ -29,7 +29,6 @@
     SimpleBaseSerializer,
     SparseMatrixSerializer,
     TorchTensorSerializer,
-    PickleSerializer,
     StringSerializer,
 )
 

src/sagemaker/serve/builder/djl_builder.py

Lines changed: 21 additions & 22 deletions
@@ -1,3 +1,15 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
 """Holds mixin logic to support deployment of Model ID"""
 from __future__ import absolute_import
 import logging
@@ -26,7 +38,10 @@
     _get_admissible_dtypes,
 )
 from sagemaker.serve.utils.local_hardware import _get_nb_instance, _get_ram_usage_mb
-from sagemaker.serve.model_server.djl_serving.prepare import prepare_for_djl_serving
+from sagemaker.serve.model_server.djl_serving.prepare import (
+    prepare_for_djl_serving,
+    _create_dir_structure,
+)
 from sagemaker.serve.utils.predictors import DjlLocalModePredictor
 from sagemaker.serve.utils.types import ModelServer, _DjlEngine
 from sagemaker.serve.mode.function_pointers import Mode
@@ -40,7 +55,6 @@
 
 logger = logging.getLogger(__name__)
 
-_JUMP_START_HUGGING_FACE_PREFIX = "huggingface"
 # Match JumpStart DJL entrypoint format
 _DJL_MODEL_BUILDER_ENTRY_POINT = "inference.py"
 _CODE_FOLDER = "code"
@@ -86,16 +100,9 @@ def _prepare_for_mode(self):
     def _get_client_translators(self):
         """Placeholder docstring"""
 
-    def _validate_model_server(self):
+    def _is_djl(self):
         """Placeholder docstring"""
-        if self.model_server != ModelServer.DJL_SERVING:
-            messaging = (
-                "HuggingFace Model ID support on model server: "
-                f"{self.model_server} is not currently supported. "
-                f"Defaulting to {ModelServer.DJL_SERVING}"
-            )
-            logger.warning(messaging)
-            self.model_server = ModelServer.DJL_SERVING
+        return self.model_server == ModelServer.DJL_SERVING
 
     def _validate_djl_serving_sample_data(self):
         """Placeholder docstring"""
@@ -112,12 +119,6 @@ def _validate_djl_serving_sample_data(self):
         ):
             raise ValueError(_INVALID_SAMPLE_DATA_EX)
 
-    def _is_jumpstart_model_id(self) -> bool:
-        """Placeholder docstring"""
-        # this will potentially extend in the future so leave like this
-        # for now, only hf jumpstart model ids will be considered
-        return self.model.startswith(_JUMP_START_HUGGING_FACE_PREFIX)
-
     def _create_djl_model(self) -> Type[Model]:
         """Placeholder docstring"""
         code_dir = str(Path(self.model_path).joinpath(_CODE_FOLDER))
@@ -211,9 +212,6 @@ def _djl_model_builder_deploy_wrapper(self, *args, **kwargs) -> Type[PredictorBa
         ram_usage_after = _get_ram_usage_mb()
 
         self.ram_usage_model_load = max(ram_usage_after - ram_usage_before, 0)
-        logger.info(
-            "RAM used to load the %s locally was %s MB", self.model, self.ram_usage_model_load
-        )
 
         return predictor
 
@@ -237,7 +235,8 @@ def _djl_model_builder_deploy_wrapper(self, *args, **kwargs) -> Type[PredictorBa
         self.pysdk_model.env["TRANSFORMERS_CACHE"] = "/tmp"
         self.pysdk_model.env["HUGGINGFACE_HUB_CACHE"] = "/tmp"
 
-        kwargs["endpoint_logging"] = True
+        if "endpoint_logging" not in kwargs:
+            kwargs["endpoint_logging"] = True
         if self.nb_instance_type and "instance_type" not in kwargs:
             kwargs.update({"instance_type": self.nb_instance_type})
 
@@ -253,6 +252,7 @@ def _build_for_hf_djl(self):
         """Placeholder docstring"""
         self.overwrite_props_from_file = True
         self.nb_instance_type = _get_nb_instance()
+        _create_dir_structure(self.model_path)
         self.engine, self.hf_model_config = _auto_detect_engine(
             self.model, self.env_vars.get("HUGGING_FACE_HUB_TOKEN")
         )
@@ -463,7 +463,6 @@ def _tune_for_hf_djl(self, max_tuning_duration: int = 1800):
 
     def _build_for_djl(self):
         """Placeholder docstring"""
-        self._validate_model_server()
         self._validate_djl_serving_sample_data()
         self.secret_key = None

src/sagemaker/serve/builder/jumpstart_builder.py

Lines changed: 19 additions & 10 deletions
@@ -1,3 +1,15 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"). You
+# may not use this file except in compliance with the License. A copy of
+# the License is located at
+#
+# http://aws.amazon.com/apache2.0/
+#
+# or in the "license" file accompanying this file. This file is
+# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
+# ANY KIND, either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
 """Placeholder docstring"""
 from __future__ import absolute_import
 
@@ -8,7 +20,7 @@
 from sagemaker.model import Model
 from sagemaker import model_uris
 from sagemaker.serve.model_server.djl_serving.prepare import prepare_djl_js_resources
-from sagemaker.serve.model_server.tgi.prepare import prepare_tgi_js_resources
+from sagemaker.serve.model_server.tgi.prepare import prepare_tgi_js_resources, _create_dir_structure
 from sagemaker.serve.mode.function_pointers import Mode
 from sagemaker.serve.utils.predictors import (
     DjlLocalModePredictor,
@@ -20,7 +32,6 @@
 from sagemaker.base_predictor import PredictorBase
 from sagemaker.jumpstart.model import JumpStartModel
 
-_JUMP_START_HUGGING_FACE_PREFIX = "huggingface"
 _DJL_MODEL_BUILDER_ENTRY_POINT = "inference.py"
 _NO_JS_MODEL_EX = "HuggingFace JumpStart Model ID not detected. Building for HuggingFace Model ID."
 _JS_SCOPE = "inference"
@@ -60,6 +71,7 @@ def __init__(self):
         self.schema_builder = None
         self.nb_instance_type = None
         self.ram_usage_model_load = None
+        self.jumpstart = None
 
     @abstractmethod
     def _prepare_for_mode(self):
@@ -77,10 +89,6 @@ def _is_jumpstart_model_id(self) -> bool:
             logger.warning(_NO_JS_MODEL_EX)
             return False
 
-        if not self.model.startswith(_JUMP_START_HUGGING_FACE_PREFIX):
-            logger.warning(_NO_JS_MODEL_EX)
-            return False
-
         logger.info("JumpStart Model ID detected.")
         return True
 
@@ -156,17 +164,16 @@ def _js_builder_deploy_wrapper(self, *args, **kwargs) -> Type[PredictorBase]:
                 None,
                 predictor,
                 self.pysdk_model.env,
+                jumpstart=True,
             )
             ram_usage_after = _get_ram_usage_mb()
 
             self.ram_usage_model_load = max(ram_usage_after - ram_usage_before, 0)
-            logger.info(
-                "RAM used to load the %s locally was %s MB", self.model, self.ram_usage_model_load
-            )
 
             return predictor
 
-        kwargs["endpoint_logging"] = True
+        if "endpoint_logging" not in kwargs:
+            kwargs["endpoint_logging"] = True
         if hasattr(self, "nb_instance_type"):
             kwargs.update({"instance_type": self.nb_instance_type})
 
@@ -186,6 +193,7 @@ def _build_for_djl_jumpstart(self):
         """Placeholder docstring"""
 
         env = {}
+        _create_dir_structure(self.model_path)
         if self.mode == Mode.LOCAL_CONTAINER:
             if not hasattr(self, "prepared_for_djl"):
                 (
@@ -227,6 +235,7 @@ def _build_for_jumpstart(self):
         """Placeholder docstring"""
         # we do not pickle for jumpstart. set to none
         self.secret_key = None
+        self.jumpstart = True
 
         pysdk_model = self._create_pre_trained_js_model()
0 commit comments
