Open-source project: microsoft/AIforEarth-API-Development
Repository: https://github.com/microsoft/AIforEarth-API-Development
Primary language: Jupyter Notebook (76.3%)

Due to new features that have since been added to Azure Machine Learning, this repository is now deprecated.

# AI for Earth - Creating APIs

These images and examples illustrate how to build containers for use in the AI for Earth API system. The following images and tags (versions/images) are available on Dockerhub:
## Notice

In addition to a running Docker environment, GPU images require the NVIDIA Docker package to support CUDA.

### CUDA Toolkit

To view the license for the CUDA Toolkit included in the cuda base image, click here.

### CUDA Deep Neural Network library (cuDNN)

To view the license for cuDNN included in the cuda base image, click here.

## Contents
- Repo Layout
- Notes
## Quickstart Tutorial

This quickstart walks you through turning a model into an API. Starting with a trained model, we will containerize it, deploy it on Azure, and expose an endpoint to call the API. We will leverage Docker containers, Azure Application Insights, Azure Container Registry, and Azure Container Instances.

We assume that you have a trained model that you want to expose as an API. To begin, download or clone this repository to your local machine.

### Create an Azure Resource Group

Throughout this quickstart tutorial, we recommend that you put all Azure resources created into a single new Resource Group. This organizes the related resources together and makes it easy to remove them as a single group.

From the Azure Portal, click Create a resource from the left menu. Search the Marketplace for "Resource Group", select the resource group option, and click Create. Use a descriptive resource group name, such as "ai4e_yourname_app_rg". Then click Create.

### Machine Setup

You will need an active Azure subscription as well as the following software installed.
## Choose a base image or example

AI for Earth APIs are all built from an AI for Earth base image. You may use a base image directly or start with an example. The following sections will help you decide.

### Base images
### Examples
In general, if you're using Python, you will want to use an image or example based on the base-py or blob-py images; if you're using R, use one based on the base-r or blob-r images. The difference between them: each blob-* image contains everything that the corresponding base-* image contains, plus additional support for mounting Azure blob storage. This is useful if you need to process, for example, a batch of images all at once: you can upload them all to Azure blob storage, and the container in which your model runs can mount that storage and access it like local storage.

### Asynchronous (async) vs. Synchronous (sync) Endpoint

In addition to your language choice, think about whether your API call should be synchronous or asynchronous.
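From the caller's perspective, the difference between the two styles can be sketched as follows. This is our own illustration, not part of the AI for Earth tooling: the endpoint names and return shapes are hypothetical, and HTTP calls are simulated with plain functions.

```python
# Hypothetical illustration of sync vs. async call shapes.
# HTTP round trips are simulated with plain functions.

def call_sync_endpoint(payload):
    """A sync endpoint blocks until the model has run, then
    returns the result directly in the response body."""
    result = {"class": "red fox", "confidence": 0.93}  # model output
    return result

def call_async_endpoint(payload):
    """An async endpoint responds immediately with a task id;
    the actual work happens in the background."""
    return {"task_id": "1234", "status": "created"}

def poll_task_status(task_id):
    """What a later GET against the task-status endpoint might return."""
    return {"task_id": task_id, "status": "completed"}

# Sync: one round trip, but the connection stays open while the model runs.
sync_result = call_sync_endpoint({"image_uri": "..."})

# Async: quick first response, then polling until the status is "completed".
task = call_async_endpoint({"image_uri": "..."})
status = poll_task_status(task["task_id"])
```

Long-running model invocations (large inputs, batch processing) generally fit the async shape; quick per-item predictions fit the sync shape.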
#### Asynchronous Implementation Examples

The following examples demonstrate async endpoints:
#### Synchronous Implementation Examples

The following examples demonstrate sync endpoints:

## Input/Output Patterns

While input patterns can be used for sync or async designs, your output design depends on your sync/async choice; therefore, we have identified recommended approaches for each.

### Input Recommendations

#### JSON

JSON is the recommended approach for data ingestion.

#### Binary Input

Many applications of AI apply models to image/binary inputs. Here are some approaches:
### Asynchronous Pattern

The preferred way of handling asynchronous API calls is to provide a task status endpoint to your users. When a request is submitted, a new task is created and its task id is returned to the caller, who can then poll the status endpoint to track progress.

We have several tools to help with task tracking that you can use for local development and testing. These tools create a database within the service instance and are not recommended for production use.

Once a task is completed, the user needs to retrieve the result of their service call. This can be accomplished in several ways:
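The task-tracking flow described above can be sketched with a simple in-memory store. This is a hand-rolled illustration for local experimentation only (the function names `create_task`, `update_task_status`, and `get_task_status` are ours, not the AI for Earth tooling's); a real deployment would back this with a shared database.

```python
import uuid
from datetime import datetime, timezone

# In-memory task store: task_id -> status record.
# Only suitable for a single service instance during local development.
_tasks = {}

def create_task():
    """Register a new task and return its id to the caller."""
    task_id = str(uuid.uuid4())
    _tasks[task_id] = {
        "status": "created",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return task_id

def update_task_status(task_id, status):
    """Called by the worker as the task progresses."""
    _tasks[task_id]["status"] = status
    _tasks[task_id]["timestamp"] = datetime.now(timezone.utc).isoformat()

def get_task_status(task_id):
    """What a GET /task/<task_id> endpoint would return."""
    return _tasks.get(task_id, {"status": "not found"})
```

A caller submits a request, receives the task id immediately, and polls the status endpoint until the status becomes, say, `completed`, at which point the result can be fetched.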
## Examples

We have provided several examples that leverage these base images to make it easier for you to get started.
After you've chosen the example that best fits your scenario, make a copy of that directory to use as your working directory, in which you apply your changes.

## Insert code to call your model

Next, in your new working directory, update the example that you chose with code that calls your specific model. This should be done in the runserver.py file (if you are using a Python example) or the api_example.R file (if you are using an R example) in the my_api (or similarly named) subfolder.

### Input handling

Your model has inputs and outputs. For example, consider a classification model that takes an image and classifies its contents as one of multiple species of animal. The input that you provide to this model is an image, and the output may be JSON-formatted text listing the classifications and their confidences. Some examples of how to send parameters as inputs into your APIs follow.

#### GET URL parameters

For GET operations, best practice dictates that a noun is used in the URL segment before the related parameter. An echo example follows.

Python and Flask:

```python
@ai4e_service.api_sync_func(api_path = '/echo/<string:text>', methods = ['GET'], maximum_concurrent_requests = 1000, trace_name = 'get:echo', kwargs = {'text'})
def echo(*args, **kwargs):
    return 'Echo: ' + kwargs['text']
```

R and Plumber:

```r
#* @param text The text to echo back
#* @get /echo/<text>
GetProcessDataTaskStatus <- function(text){
  print(text)
}
```

#### POST body

For non-trivial parameters, retrieve parameters from the body sent as part of the request. JSON is the preferred standard for API transmission. The following gives an example of sample input, followed by Python and R usage.

Sample input:

```json
{
  "container_uri": "https://myblobacct.blob.core.windows.net/user?st=2018-08-02T12%3A01%3A00Z&se=5200-08-03T12%3A01%3A00Z&sp=rwl&sv=2017-04-17&sr=c&sig=xxx",
  "run_id": "myrunid"
}
```

Python and Flask:

```python
from flask import Flask, request
import json

# Inside a Flask request handler:
post_body = request.get_json()
print(post_body['run_id'])
print(post_body['container_uri'])
```

R and Plumber:

```r
library(jsonlite)

#* @post /process-data
ProcessDataAPI <- function(req, res){
  post_body <- req$postBody
  input_data <- fromJSON(post_body, simplifyDataFrame=TRUE)

  print(input_data$run_id)
  print(input_data$container_uri)
}
```

### Output handling

Next, you need to send back your model's results as output. Two return types are important when dealing with hosted ML APIs: non-binary and binary.

#### Non-binary data

You may need to return non-binary data, like simple strings or numbers. The preferred method of returning non-binary data is JSON.

Python and Flask:

```python
import json

def post(self):
    ret = {}
    ret['run_id'] = myrunid  # the run id associated with this request
    ret['container_uri'] = 'https://myblobacct.blob.core.windows.net/user?st=2018-08-02T12%3A01%3A00Z&se=5200-08-03T12%3A01%3A00Z&sp=rwl&sv=2017-04-17&sr=c&sig=xxx'
    return json.dumps(ret)
```

R and Plumber:

```r
ProcessDataAPI <- function(req, res){
  post_body <- req$postBody
  input_data <- fromJSON(post_body, simplifyDataFrame=TRUE)

  # Return JSON containing run_id and container_uri
  data.frame(input_data$run_id, input_data$container_uri)
}
```

#### Binary data

You may also need to return binary data, like images.

Python and Flask:

```python
from io import BytesIO

import tifffile
from flask import request, send_file

ACCEPTED_CONTENT_TYPES = ['image/tiff', 'application/octet-stream']

# Inside a Flask request handler:
if request.headers.get("Content-Type") in ACCEPTED_CONTENT_TYPES:
    tiff_file = tifffile.imread(BytesIO(request.data))
    # Do something with the tiff_file...
    prediction_stream = BytesIO()
    # Create your image to return...
    prediction_stream.seek(0)
    return send_file(prediction_stream)
```

## Function decorator detail

We use function decorators to create APIs out of your functions, such as those that execute a model. Here, we detail the two decorators and their parameters. There are two decorators:
Each decorator contains the following parameters:
## Create AppInsights instrumentation keys

Application Insights is an Azure service for application performance management. We have integrated with Application Insights to provide advanced monitoring capabilities. You will need to generate both an instrumentation key and an API key to use in your application. The instrumentation key is for general logging and tracing and is found under the "Properties" section of your Application Insights instance in the Azure portal.

Click Create, then choose a name for your Application Insights resource. For Application Type, choose General from the drop-down menu. For Resource Group, select "Use existing" and choose the resource group that you created earlier. Once your AppInsights resource has successfully deployed, navigate to the resource from your home screen and locate the Instrumentation Key.

Next, create a Live Metrics API key. Scroll down in the left menu to find API Access within Application Insights, and click Create API key. When creating the key, be sure to select "Authenticate SDK control channel". Copy and store both of these keys in a safe place.

## Install required packages

Now, look at the Dockerfile in your code. Update the Dockerfile to install any required packages. There are several ways to install packages; we cover popular ones here:
```dockerfile
# pip (Python packages)
RUN /usr/local/envs/ai4e_py_api/bin/pip install grpcio opencensus

# apt-get (system packages)
RUN apt-get install gfortran -y

# R packages
RUN R -e 'install.packages("rgeos"); library(rgeos)'
```

## Set environment variables

The Dockerfile also contains several environment variables that should be set for proper logging. You will need to add your two Application Insights keys here as well. Follow the instructions within the file.

```dockerfile
# Application Insights keys and trace configuration
ENV APPINSIGHTS_INSTRUMENTATIONKEY=your_instrumentation_key_goes_here \
    LOCALAPPDATA=/app_insights_data \
    OCAGENT_TRACE_EXPORTER_ENDPOINT=localhost:55678

# The following variables will allow you to filter logs in AppInsights
ENV SERVICE_OWNER=AI4E_Test \
    SERVICE_CLUSTER=Local\ Docker \
    SERVICE_MODEL_NAME=base-py example \
    SERVICE_MODEL_FRAMEWORK=Python \
    SERVICE_MODEL_FRAMEOWRK_VERSION=3.6.6 \
    SERVICE_MODEL_VERSION=1.0

# The API_PREFIX is the URL path that will occur after your domain and before your endpoints
ENV API_PREFIX=/v1/my_api/tasker
```

You may modify other environment variables as well. In particular, you may want to change the environment variable API_PREFIX. We recommend using the format "/<version-number>/<api-name>/<function>", such as "/v1/my_api/tasker".

## (Optional) Set up Azure blob storage

Follow these steps if you are working from the blob-mount-py example. If you do not plan to use Azure blob storage in your app, skip ahead to Build and run your image.

Create an Azure storage account by selecting "Storage Accounts" from the left menu and clicking the Add button. Make sure to select the resource group you previously created, and use a descriptive name for your storage account (lowercase letters and numbers only). You may configure advanced options for your account here, or simply click "Review + create". Click "Create" on the validation screen that appears. Once the storage account is deployed, click "Go to resource".

You still need to create a container within your storage account. To do this, scroll down in the left menu of your storage account and click "Blobs". Click the plus sign in the top left to create a new container. Use a text editor to create an empty file named