
Commit 639b38c

Merge pull request #36 from oracle/model-artifact
added notebooks
2 parents a32defc + 2c2b4d3

File tree: 3 files changed (+165, -126 lines)

Lines changed: 24 additions & 0 deletions
@@ -1 +1,25 @@
+## Notebook Examples
 
+The following notebook examples serve as tutorials on preparing and saving different model artifacts, and then deploying them.
+
+1. [linear_regression_generic.ipynb](linear_regression_generic.ipynb)
+    * This notebook provides an example of how to prepare and save an sklearn model artifact using the ADS generic method, and deploy the model as an HTTP endpoint.
+
+1. [pytorch_pretrained.ipynb](pytorch_pretrained.ipynb)
+    * This notebook provides an example of how to prepare and save a PyTorch model artifact using ADS, publish a conda environment, and deploy the model as an HTTP endpoint.
+
+1. [saving_model_oci_python_sdk.ipynb](saving_model_oci_python_sdk.ipynb)
+    * This example notebook demonstrates creating and uploading an XGBoost binary logistic model, with metadata and schema, to the model catalog v2.0.
+
+1. [simple-model-deployment.ipynb](simple-model-deployment.ipynb)
+    * This example notebook demonstrates how to deploy an sklearn random forest classifier as an HTTP endpoint using Model Deployment.
+
+1. [uploading_larger_artifact_oci_python_sdk.ipynb](uploading_larger_artifact_oci_python_sdk.ipynb)
+    * This example notebook demonstrates a simple solution, using the OCI Python SDK, that lets data scientists upload larger model artifacts without hitting the timeout error commonly encountered with large artifacts. It shows the end-to-end steps, from setting up the configuration to uploading the model artifact.
+
+1. [uploading_larger_artifact_oracle_ads.ipynb](uploading_larger_artifact_oracle_ads.ipynb)
+    * This example notebook demonstrates a simple solution, using the Oracle ADS library, that lets data scientists upload larger model artifacts without hitting the timeout error commonly encountered with large artifacts. It shows the end-to-end steps, from setting up the configuration to uploading the model artifact.
+
+1. [xgboost_onnx.ipynb](xgboost_onnx.ipynb)
+    * This example notebook demonstrates how to prepare and save an XGBoost model artifact using the ADSModel `prepare()` method and deploy the model as an HTTP endpoint.
+

model_catalog_examples/artifact_boilerplate/timeout_solution_oci_python_sdk.ipynb renamed to model_catalog_examples/notebook_examples/uploading_larger_artifact_oci_python_sdk.ipynb

Lines changed: 95 additions & 65 deletions
@@ -2,64 +2,52 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "5f77a6ca",
+   "id": "3a18e055",
    "metadata": {},
    "source": [
-    "### OCI Data Science - Useful Tips\n",
-    "<details>\n",
-    "<summary><font size=\"2\">Check for Public Internet Access</font></summary>\n",
-    "\n",
-    "```python\n",
-    "import requests\n",
-    "response = requests.get(\"https://oracle.com\")\n",
-    "assert response.status_code==200, \"Internet connection failed\"\n",
-    "```\n",
-    "</details>\n",
-    "<details>\n",
-    "<summary><font size=\"2\">Helpful Documentation</font></summary>\n",
-    "<ul><li><a href=\"https://docs.cloud.oracle.com/en-us/iaas/data-science/using/data-science.htm\">Data Science Service Documentation</a></li>\n",
-    "<li><a href=\"https://docs.cloud.oracle.com/iaas/tools/ads-sdk/latest/index.html\">ADS documentation</a></li>\n",
-    "</ul>\n",
-    "</details>\n",
-    "<details>\n",
-    "<summary><font size=\"2\">Typical Cell Imports and Settings for ADS</font></summary>\n",
-    "\n",
-    "```python\n",
-    "%load_ext autoreload\n",
-    "%autoreload 2\n",
-    "%matplotlib inline\n",
-    "\n",
-    "import warnings\n",
-    "warnings.filterwarnings('ignore')\n",
+    "<font color=gray>Oracle Cloud Infrastructure Data Science Sample Notebook\n",
     "\n",
-    "import logging\n",
-    "logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.ERROR)\n",
-    "\n",
-    "import ads\n",
-    "from ads.dataset.factory import DatasetFactory\n",
-    "from ads.automl.provider import OracleAutoMLProvider\n",
-    "from ads.automl.driver import AutoML\n",
-    "from ads.evaluations.evaluator import ADSEvaluator\n",
-    "from ads.common.data import ADSData\n",
-    "from ads.explanations.explainer import ADSExplainer\n",
-    "from ads.explanations.mlx_global_explainer import MLXGlobalExplainer\n",
-    "from ads.explanations.mlx_local_explainer import MLXLocalExplainer\n",
-    "from ads.catalog.model import ModelCatalog\n",
-    "from ads.common.model_artifact import ModelArtifact\n",
-    "```\n",
-    "</details>\n",
-    "<details>\n",
-    "<summary><font size=\"2\">Useful Environment Variables</font></summary>\n",
-    "\n",
-    "```python\n",
-    "import os\n",
-    "print(os.environ[\"NB_SESSION_COMPARTMENT_OCID\"])\n",
-    "print(os.environ[\"PROJECT_OCID\"])\n",
-    "print(os.environ[\"USER_OCID\"])\n",
-    "print(os.environ[\"TENANCY_OCID\"])\n",
-    "print(os.environ[\"NB_REGION\"])\n",
-    "```\n",
-    "</details>"
+    "Copyright (c) 2021 Oracle, Inc. All rights reserved. <br>\n",
+    "Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.\n",
+    "</font>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "59909c92",
+   "metadata": {},
+   "source": [
+    "# Uploading a Larger Size Model Artifact Using the OCI Python SDK\n",
+    "\n",
+    "This notebook demonstrates a simple solution, using the OCI Python SDK, that lets data scientists upload larger model artifacts without hitting the timeout error commonly encountered with large artifacts. It shows the end-to-end steps, from setting up the configuration to uploading the model artifact."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "96aa5b7b",
+   "metadata": {},
+   "source": [
+    "## Prerequisites for Running this Notebook"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "483c5bed",
+   "metadata": {},
+   "source": [
+    "* We recommend that you run this notebook in a notebook session using the **\"Data Exploration and Manipulation for CPU Python 3.7 V2\" Data Science conda environment**\n",
+    "* You need access to the public internet\n",
+    "* Upgrade the current version of the OCI Python SDK (`oci`): "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "919b4389",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!pip install --upgrade oci"
    ]
   },
   {
@@ -76,7 +64,7 @@
    "import os\n",
    "import logging\n",
    "\n",
-   "REGION = \"us-ashburn-1\"\n",
+   "REGION = \"<replace-with-region>\"\n",
    "logger = logging.getLogger('upload_model_artifact')\n",
    "logger.setLevel(logging.DEBUG)\n",
    "ch = logging.StreamHandler()\n",
@@ -113,7 +101,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-   "SERVICE_ENDPOINT = \"https://datascience.us-phoenix-1.oci.oraclecloud.com/20190101\"\n",
+   "SERVICE_ENDPOINT = \"<replace-with-service-endpoint>\"\n",
    "\n",
    "def create_data_science_client(config: dict) -> DataScienceClient:\n",
    "    \"\"\"\n",
@@ -147,8 +135,8 @@
    "source": [
    "# set Compartment Id\n",
    "COMPARTMENT_ID = os.environ['NB_SESSION_COMPARTMENT_OCID']\n",
-   "PROJECT_DESCRIPTION = \"Test Project\"\n",
-   "PROJECT_DISPLAY_NAME = \"ModelStore-ArtifactTest\"\n",
+   "PROJECT_DESCRIPTION = \"<replace-with-your-project-description>\"\n",
+   "PROJECT_DISPLAY_NAME = \"<replace-with-your-project-display-name>\"\n",
    "\n",
    "data_science_models = oci.data_science.models\n",
    "\n",
@@ -164,6 +152,7 @@
    "    \"\"\"\n",
    "\n",
    "    logger.info(\"Defining project details object...\")\n",
+   "\n",
    "    # We need to create a project first. Get the create project details object.\n",
    "    create_project_details_object = data_science_models.CreateProjectDetails()\n",
    "\n",
@@ -249,7 +238,7 @@
    "    logger.info(\"Defining Model details object...\")\n",
    "    create_model_details_object = data_science_models.CreateModelDetails()\n",
    "    create_model_details_object.compartment_id = COMPARTMENT_ID\n",
-   "    create_model_details_object.display_name = \"MD-ModelArtifact-test\"\n",
+   "    create_model_details_object.display_name = \"<replace-with-your-object-display-name>\"\n",
    "    create_model_details_object.project_id = project_id\n",
    "    return create_model_details_object\n",
    "\n",
@@ -303,14 +292,55 @@
    "id": "ec13aee5",
    "metadata": {},
    "outputs": [],
-   "source": []
+   "source": [
+    "# provide the artifact file path\n",
+    "ARTIFACT_FILE_NAME = \"<replace-with-your-artifact-file-path>\"\n",
+    "\n",
+    "def upload_model_artifact(model_id: str):\n",
+    "    try:\n",
+    "        logger.info(\"uploading model artifact...\")\n",
+    "        # creates the model artifact\n",
+    "        # Make sure to provide the correct path of the zip file in ARTIFACT_FILE_NAME\n",
+    "        create_model_artifact(data_science_client, model_id)\n",
+    "    except Exception as e:\n",
+    "        return str(e)\n",
+    "    return \"True\"\n",
+    "\n",
+    "def create_model_artifact(data_science_client: DataScienceClient, model_id: str):\n",
+    "    \"\"\"\n",
+    "    Creates the model artifact.\n",
+    "\n",
+    "    Parameters:\n",
+    "        data_science_client (DataScienceClient): the data science client\n",
+    "        model_id (str): the model id to use in creating the artifact\n",
+    "    \"\"\"\n",
+    "    logger.info(\"Create artifact\")\n",
+    "    f = open(ARTIFACT_FILE_NAME, \"rb\")\n",
+    "    logger.info(\"File open\")\n",
+    "    content_disposition = \"attachment;filename={}\".format(ARTIFACT_FILE_NAME)\n",
+    "    logger.debug(content_disposition)\n",
+    "    logger.debug(data_science_client)\n",
+    "    try:\n",
+    "        data_science_client.base_client.timeout = 30 * 60\n",
+    "        data_science_client.create_model_artifact(\n",
+    "            model_id, f, content_disposition=content_disposition)\n",
+    "        logger.info(\"Upload success\")\n",
+    "        print(\"Upload Success\")\n",
+    "        logger.info(\"Finished creating artifact\")\n",
+    "    except Exception as e:\n",
+    "        print(\"==================\", str(e))\n",
+    "        logger.error(\"Upload error\")\n",
+    "        logger.debug(str(e))\n",
+    "    f.close()\n",
+    "    return"
+   ]
   }
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python [conda env:dataexpl_p37_cpu_v2]",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "conda-env-dataexpl_p37_cpu_v2-py"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -322,9 +352,9 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.10"
+   "version": "3.8.8"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 5
-}
+}
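The heart of the new upload cell is a two-step pattern: raise the SDK client's timeout, then stream the artifact file to `create_model_artifact` with a `Content-Disposition` header. The sketch below illustrates that pattern in plain Python with a hypothetical `FakeDataScienceClient` standing in for the real `oci.data_science.DataScienceClient` (which, per the notebook, exposes `base_client.timeout` and `create_model_artifact`); it is an illustration of the call shape, not the OCI SDK itself:

```python
import io


class FakeDataScienceClient:
    """Hypothetical stand-in for oci.data_science.DataScienceClient."""

    def __init__(self):
        # The real SDK's base client ships with a much shorter read timeout.
        self.base_client = type("BaseClient", (), {})()
        self.base_client.timeout = 60
        self.uploads = []

    def create_model_artifact(self, model_id, body, content_disposition=None):
        # Record the call instead of hitting the service.
        self.uploads.append((model_id, body.read(), content_disposition))


def upload_model_artifact(client, model_id, artifact_bytes, filename):
    # Raise the client's timeout to 30 minutes so a large artifact
    # does not fail mid-upload -- the key line from the notebook.
    client.base_client.timeout = 30 * 60
    content_disposition = "attachment;filename={}".format(filename)
    body = io.BytesIO(artifact_bytes)
    client.create_model_artifact(model_id, body,
                                 content_disposition=content_disposition)


client = FakeDataScienceClient()
upload_model_artifact(client, "ocid1.datasciencemodel.oc1..example",
                      b"zip-bytes", "artifact.zip")
print(client.base_client.timeout)  # 1800
print(client.uploads[0][2])        # attachment;filename=artifact.zip
```

With the real client, the same two lines (setting `base_client.timeout` and calling `create_model_artifact` with an open file handle) are what eliminate the timeout on large artifacts.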
