Until recently, data scientists had only a handful of tools to work with, but today there is a robust ecosystem of frameworks and hardware runtimes. While this growing toolbox is extremely useful, each framework has the potential to become a silo, lacking interoperability. Supporting interoperability requires customization, and reimplementing models for movement between frameworks can slow development by weeks or months. The Open Neural Network Exchange (ONNX) format was created to ease the process of model porting between frameworks, some of which may be more desirable for specific phases of the development cycle, such as faster inferencing. The idea is that you can train a model with one tool stack and then deploy it using another for inference and prediction.

In ONNX format, the machine learning model is represented as a computational graph structure with operators and metadata describing the model. It is portable across frameworks, and every framework supporting ONNX provides implementations of these operators. The ONNX libraries contain tools to read and write ONNX models, make predictions, and draw graphs of the data flow.

The Oracle Machine Learning Services REST API (OML Services) is included with Oracle Machine Learning on Oracle Autonomous Database cloud service. In addition to in-database models and cognitive text, OML Services enables ONNX format model deployment through REST endpoints for regression models and classification models. For classification models, both non-image models and image models are supported.

In this tutorial, you’ll learn how to:

  • Train an XGBoost model using its scikit-learn API
  • Convert the model to ONNX format
  • Deploy the model to OML Services on Autonomous Database

Step 1: Train a Python XGBoost model

We will create a machine learning model that predicts the price of a house based on its characteristics. We’ll use the popular Boston Housing dataset, which contains the details of 506 houses in Boston, to build a regression model.

To start, import the dataset and store it in a variable called boston.

from sklearn.datasets import load_boston
boston = load_boston()

The boston variable is a dictionary-like object. You can view its keys using the keys() method and the size of the dataset, in rows and columns, using the data.shape attribute. The feature_names attribute returns the feature names.

Next, separate the data into target and predictor variables. Then split the data into train and test sets.

We use the train_test_split function from sklearn’s model_selection module with test size equal to 30% of the data. A random state is assigned for reproducibility.

from sklearn.model_selection import train_test_split
x, y = boston.data, boston.target
xtrain, xtest, ytrain, ytest = train_test_split(x, y, test_size=0.30, random_state=99)

For the regression model, we’ll use the XGBRegressor class of the xgboost package, passing the hyperparameter values as arguments. We’ll initialize the regressor object and print the model parameters.

import xgboost as xgb
model = xgb.XGBRegressor(objective='reg:squarederror', colsample_bytree=0.3,
                         learning_rate=0.1, max_depth=5, alpha=10, n_estimators=10)
print(model)
XGBRegressor(alpha=10, base_score=None, booster=None, colsample_bylevel=None,
             colsample_bynode=None, colsample_bytree=0.3, gamma=None,
             gpu_id=None, importance_type='gain', interaction_constraints=None,
             learning_rate=0.1, max_delta_step=None, max_depth=5,
             min_child_weight=None, missing=nan, monotone_constraints=None,
             n_estimators=10, n_jobs=None, num_parallel_tree=None,
             random_state=None, reg_alpha=None, reg_lambda=None,
             scale_pos_weight=None, subsample=None, tree_method=None,
             validate_parameters=None, verbosity=None)

Now, we’ll train the model using the fit method and make predictions using the predict method on the model.

model.fit(xtrain, ytrain)
pred = model.predict(xtest)

Compute the RMSE by invoking the mean_squared_error function from sklearn’s metrics module and taking the square root. Since prices in the dataset are expressed in thousands of dollars, the RMSE of approximately 10.4 corresponds to a prediction error of roughly $10,400.

import numpy as np
from sklearn.metrics import mean_squared_error 
rmse = np.sqrt(mean_squared_error(ytest, pred))
print("RMSE: %f" % (rmse))
RMSE: 10.391891

Step 2: Convert the model to ONNX format

To convert the XGBoost model to ONNX, we need the model in .onnx format, zipped together with a metadata.json file. To start, import the required libraries and set up the directory on the file system where the ONNX model will be created.

import onnxmltools
import json
from zipfile import ZipFile
from skl2onnx.common.data_types import FloatTensorType
import os
home = os.path.expanduser('~')
target_folder = os.path.join(home, 'onnx_test')
os.makedirs(target_folder, exist_ok=True)
os.chdir(target_folder)

Now define the model inputs to the ONNX conversion function convert_xgboost. scikit-learn does not store information about the training data, so it is not always possible to retrieve the number of features or their types. For this reason, convert_xgboost contains an argument called initial_types to define the model input types.

For each numpy array (called a tensor in ONNX) passed to the model, choose a name and declare its data type and shape. Here, float_input is the chosen name of the input tensor. The shape is defined as [None, xtrain.shape[1]]: the first dimension is the number of rows and the second is the number of features. The number of rows is left as None because the number of requested predictions is unknown at the time the model is converted.

initial_types = [('float_input', FloatTensorType([None, xtrain.shape[1]]))]

Now that the model inputs are defined, we are ready to convert the XGBoost model to ONNX format. We use convert_xgboost from OnnxMLTools and save the model to the file xgboost_boston.onnx.

onnx_model = onnxmltools.convert_xgboost(model, initial_types=initial_types)
onnxmltools.utils.save_model(onnx_model, './xgboost_boston.onnx')

Ensure that your metadata.json file contains the information as listed in the table Contents and Description of metadata.json file in Deploy ONNX Format Models.

The function field in the metadata.json file is required for all models. In this case, the value for function in metadata.json is regression. Add the metadata and compress the file, creating onnx_xgboost.model.zip.

metadata = { "function": "regression", }
with open('./metadata.json', mode='w') as f: 
       json.dump(metadata, f)
with ZipFile('./onnx_xgboost.model.zip', mode='w') as zf:
    zf.write('./metadata.json')
    zf.write('./xgboost_boston.onnx')

Examine the string representation of the ONNX model. It contains the version of OnnxMLTools used to create the model and a text representation of the graph structure, including the input types defined earlier. Note that the model can also be viewed in graphical format using Netron.

print(str(onnx_model))
ir_version: 7
producer_name: "OnnxMLTools"
producer_version: "1.7.0"
domain: "onnxconverter-common"
model_version: 0
doc_string: ""
...

Now score the data using the ONNX Runtime environment to validate that the ONNX model works properly.

After importing the ONNX Runtime library, load the ONNX model in the runtime environment, get the model metadata to map the input to the runtime model, and then retrieve the first 10 predictions.

Note: at the time of this writing, OML Services supports ONNX Runtime version 1.4.0.

# Import the ONNX runtime environment
import onnxruntime as rt
# Setup runtime. This instantiates an ONNX inference session and loads the persisted model.
sess = rt.InferenceSession("xgboost_boston.onnx")
# Get model metadata to enable mapping of new input to the runtime model
input_name = sess.get_inputs()[0].name
label_name = sess.get_outputs()[0].name
# Create predictions. The inputs are the xtest values, cast to type float32.
pred_onnx = sess.run([label_name], {input_name: xtest.astype(np.float32)})[0]
# Print first 10 predictions
print("Prediction:n", pred_onnx[0:10])
Prediction:
 [[22.379318 ]
 [21.797579 ]
 [18.714703 ]
 [16.052345 ]
 [22.825737 ]
 [15.144974 ]
 [ 7.7380514]
 [14.48102  ]
 [12.728854 ]
 [ 9.806006 ]]

Verify the ONNX and local scikit-learn predictions are similar.

test_count = 10
test_cases = xtest[0:test_count]
local_pred = model.predict(test_cases)
print(f"Local predictions are: {local_pred}")
Local predictions are: [22.37932 21.797579 18.714703 16.052345 22.825739 15.144974 7.7380514 14.481019 12.728853 9.806006]
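The two sets of predictions agree to within float32 rounding, since the ONNX graph computes in single precision. A small pure-Python check makes the comparison explicit (the values below are the ones printed above):

```python
# Compare the first 10 ONNX predictions with the local XGBoost predictions.
# The tiny discrepancies come from ONNX Runtime computing in float32.
onnx_pred = [22.379318, 21.797579, 18.714703, 16.052345, 22.825737,
             15.144974, 7.7380514, 14.48102, 12.728854, 9.806006]
local_pred = [22.37932, 21.797579, 18.714703, 16.052345, 22.825739,
              15.144974, 7.7380514, 14.481019, 12.728853, 9.806006]

max_abs_diff = max(abs(a - b) for a, b in zip(onnx_pred, local_pred))
print(f"Maximum absolute difference: {max_abs_diff:.2e}")
assert max_abs_diff < 1e-4  # well within float32 rounding
```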

Step 3: Deploy the ONNX model in OML Services

OML Services is a REST API that uses an Oracle Autonomous Database as its back-end repository. The OML Services REST API supports the following functions for OML models and ONNX format models:

  • Storing, deleting, and listing of deployed models
  • Retrieving metadata and content of models
  • Organizing models under namespaces
  • Creating, deleting, and listing of model endpoints
  • Getting model APIs
  • Scoring with models

To access OML Services using the REST API, you must provide an access token. To authenticate and obtain an access token, we use cURL with the -d option to pass the user name and password for your OML Services account against the OML User Management Cloud Service REST endpoint /oauth2/v1/token.

Note that while we are using cURL, any REST client such as Postman (or even PL/SQL) can be used with OML Services.
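For instance, the same token request that the cURL command below issues can be built with nothing but Python’s standard library. This sketch only constructs the request — the placeholder arguments stand in for the values exported to the shell below, and the final commented line is what would actually send it:

```python
# Hedged sketch: build the OML Services token request with the standard library.
import json
import urllib.request

def build_token_request(omlserver, tenant, database, username, password):
    # Same endpoint as the cURL call: /omlusers/.../api/oauth2/v1/token
    url = (f"{omlserver}/omlusers/tenants/{tenant}"
           f"/databases/{database}/api/oauth2/v1/token")
    payload = json.dumps({"grant_type": "password",
                          "username": username,
                          "password": password}).encode()
    return urllib.request.Request(
        url, data=payload, method="POST",
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"})

# Placeholder values for illustration only.
req = build_token_request("https://adb.example.oraclecloud.com",
                          "OCID1.TENANCY.EXAMPLE", "MYPDB",
                          "OMLUSER", "secret")
# token = json.load(urllib.request.urlopen(req))["accessToken"]  # sends the request
```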

Exchange your OML credentials for a bearer token.

export omlserver=ADBURL
export tenant=TENANCYOCID
export database=DBNAME
export username=USERNAME
export password=PASSWORD

where ADBURL is the Autonomous Database URL, TENANCYOCID is the Autonomous Database tenancy OCID, DBNAME is the pluggable database name, USERNAME is the OML user name, and PASSWORD is the OML user password.

$ curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' \
       -d '{"grant_type":"password", "username":"'${username}'", "password":"'${password}'"}' \
       "${omlserver}/omlusers/tenants/${tenant}/databases/${database}/api/oauth2/v1/token"
{"accessToken":"eyJhbGci....6zIw==","expiresIn":3600,"tokenType":"Bearer"}

Copy the accessToken field from the response and assign it to the variable token, surrounded by single quotes. A token has a lifetime of 3600 seconds, or 1 hour, and it can be refreshed for up to 8 hours.

$ export token='eyJhbGci....6zIw=='

In order to make the model available through the REST API, we have to save it to the OML Services repository using a POST request. This stores the model in the repository, generates the unique model ID, and provides the URI of the repository where the model is stored.

This method takes as inputs the binary model data, the model name, description, version, type, and namespace, and whether the model is shared among tenancy users.

The Linux command line JSON parser jq is used to format the output. To install jq on your Linux REST client, run the command sudo yum install jq.

$ curl -X POST --header "Authorization: Bearer $token" 
       ${omlserver}/omlmod/v1/models 
       --header 'content-type: multipart/form-data; boundary=Boundary' 
       --form modelData=@onnx_xgboost.model.zip 
       --form modelName=onnx_model 
       --form modelType=ONNX 
       --form 'description=Saving ONNX XGBoost model' 
       --form 'namespace=ONNX_MODELS' 
       --form version=1.0 
       --form shared=true  | jq

The returned value contains the model ID and a self-reference link.

"modelId": "c8c9f7d0-3e4a-4c60-bcd7-141c9116c064",
      "links": [
    {
      "rel": "self",
      "href": "https://adb.us-sanjose-1.oraclecloud.com/omlmod/v1/models/c8c9f7d0-3e4a-4c60-bcd7-141c9116c064"
    }
  ]
}
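If you are scripting the workflow rather than reading responses by eye, the model ID can be pulled out of this response with a few lines of Python. The JSON literal below simply mirrors the response above:

```python
import json

# Parse the store-model response and extract the generated model ID
# and the self link; the text mirrors the response shown above.
response_text = '''
{
  "modelId": "c8c9f7d0-3e4a-4c60-bcd7-141c9116c064",
  "links": [
    {
      "rel": "self",
      "href": "https://adb.us-sanjose-1.oraclecloud.com/omlmod/v1/models/c8c9f7d0-3e4a-4c60-bcd7-141c9116c064"
    }
  ]
}
'''
response = json.loads(response_text)
model_id = response["modelId"]
self_link = next(l["href"] for l in response["links"] if l["rel"] == "self")
print(model_id)
```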

Next, deploy the model by creating a model scoring endpoint, identified by the model ID and the requested URI, onnx_model.

$ curl -X POST "${omlserver}/omlmod/v1/deployment" 
       --header 'Content-Type: application/json' 
       --header "Authorization: Bearer ${token}" 
       --data '{
        "modelId":"c8c9f7d0-3e4a-4c60-bcd7-141c9116c064",
        "uri":"onnx_model"}' | jq

This returns the deployment information: the model ID, the URI, and a time stamp:

{
  "links": [
    {
      "rel": "self",
      "href": "https://adb.us-sanjose-1.oraclecloud.com/omlmod/v1/deployment/onnx_model"
    }
  ],
  "modelId": "c8c9f7d0-3e4a-4c60-bcd7-141c9116c064",
  "uri": "onnx_model",
  "deployedOn": "2021-06-13T19:37:47.272Z"
}

Now that the model is saved in the model repository, you can view it in the OML Notebooks area in Autonomous Database. Select the hamburger menu, then navigate to Models and look for the deployed model under Deployments. Select the model name to view the model metadata, and select the model URI to view the Open API specification for the model.

You can also view the model endpoint details from the REST API. For example, the deployment endpoint, identified by the model URI, returns the model ID, URI, and date of deployment:

$ curl -X GET "${omlserver}/omlmod/v1/deployment/onnx_model" \
       --header "Authorization: Bearer ${token}" | jq
{
  "links": [
    {
      "rel": "self",
      "href": "https://adb.us-sanjose-1.oraclecloud.com/omlmod/v1/deployment/onnx_model"
    }
  ],
  "modelId": "c8c9f7d0-3e4a-4c60-bcd7-141c9116c064",
  "uri": "onnx_model",
  "deployedOn": "2021-06-13T19:37:47.272Z"
}

Now let’s make predictions with the deployed model and compare them against the local XGBRegressor.predict results. A single score is returned:

$ curl -X POST "${omlserver}/omlmod/v1/deployment/onnx_model/score" 
       --header "Authorization: Bearer ${token}" 
       --header 'Content-Type: application/json' 
       --data '{"inputRecords": [{"float_input": [[0.00632, 18.0, 2.31, 0.0, 0.538, 6.575, 65.2, 4.0090, 1.0, 296.0, 15.3, 369.90, 4.98]]}]}' | jq
{
  "scoringResults": [
    {
      "regression": 20.695398330688477
    }
  ]
}
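If you prefer to drive scoring from Python, the inputRecords payload can be assembled programmatically. The helper below is a hypothetical convenience, not part of OML Services; float_input is the tensor name chosen at conversion time, and the feature values are the single record used in the cURL call above:

```python
import json

# Build the JSON payload for the /score endpoint. Each record maps the input
# tensor name to a 2-D list of feature rows; several records can be sent in
# one request. build_score_payload is a hypothetical helper for illustration.
def build_score_payload(rows, input_name="float_input"):
    return {"inputRecords": [{input_name: [list(row)]} for row in rows]}

record = [0.00632, 18.0, 2.31, 0.0, 0.538, 6.575, 65.2,
          4.0090, 1.0, 296.0, 15.3, 369.90, 4.98]
payload = build_score_payload([record])
print(json.dumps(payload))
```

The resulting JSON string is exactly the --data body passed to cURL above and can be POSTed with any HTTP client.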

When you are ready to delete the model, first delete the model endpoint, then delete the stored model using the model ID:

$ curl -i -X DELETE "${omlserver}/omlmod/v1/deployment/onnx_model"    --header "Authorization: Bearer ${token}"

HTTP/1.1 204 No Content
Date: Mon, 28 June 2020 01:12:13 GMT
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, DELETE, PUT
Access-Control-Allow-Headers: X-Requested-With, Content-Type

$ curl -i -X DELETE "${omlserver}/omlmod/v1/models/c8c9f7d0-3e4a-4c60-bcd7-141c9116c064"    --header "Authorization: Bearer ${token}"

HTTP/1.1 204 No Content
Date: Mon, 28 June 2020 01:12:13 GMT
Connection: keep-alive
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST, DELETE, PUT
Access-Control-Allow-Headers: X-Requested-With, Content-Type

Note that attempting to delete a model that is still deployed results in an error stating that the model is currently deployed, which is why the endpoint must be deleted first.

To learn more about ONNX and Oracle Machine Learning Services refer to these documentation resources and our Ask Tom Office Hours library.

Special thanks to Dongfang Bai and Ning Hao for their input on this blog.


