OpenAI Service is a cloud-based service that allows you to use OpenAI's large language models (LLMs) in your applications. LLMs are powerful AI models that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
Infrastructure of OpenAI Service
Before using OpenAI Service for any LLM deployment, it is important to understand the underlying infrastructure. Make sure you have set up the relevant accounts and have the access you need before working with your model. The infrastructure can be broken into two areas: architecture and components.
The architecture of OpenAI Service is relatively simple. The service consists of two main components:
- The backend is responsible for managing the deployed LLMs and handling API calls. The backend is hosted on Azure Kubernetes Service (AKS).
- The frontend is a web-based interface that allows you to interact with the service. The frontend is hosted on Azure App Service.
The following are the components that are required to configure, deploy, and consume OpenAI Service:
- Azure account: You need an Azure account to create a resource in the Azure portal.
- Azure CLI: You can use the Azure CLI to configure a resource and deploy an LLM.
- Your application: You can use your application to consume the LLM model.
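To connect the last component, your application, to the first two, you need the resource's endpoint and access key. A minimal sketch of how an application might pick these up, assuming they are stored in environment variables (the variable names here are illustrative, not mandated by the service):

```python
import os

def load_openai_config() -> dict:
    """Read the OpenAI Service endpoint and access key from environment
    variables. Falls back to a placeholder endpoint so the shape of the
    config is visible even before real credentials are set."""
    endpoint = os.environ.get(
        "OPENAI_SERVICE_ENDPOINT",
        "https://example-resource.openai.azure.com",
    )
    key = os.environ.get("OPENAI_SERVICE_KEY", "")
    # Normalize the endpoint so later code can append paths safely.
    return {"endpoint": endpoint.rstrip("/"), "key": key}

config = load_openai_config()
```

Keeping the key out of source code and in the environment (or a secret store) also lines up with the security measures described in the next section.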
Security Elements of OpenAI Service
The infrastructure components above get you up and running, but it is equally important to understand the layers of security and how they protect your data. OpenAI Service uses several security measures, including:
- Authentication: You need to provide an access key to authenticate your API calls.
- Authorization: You can specify permissions for each API call.
- Encryption: Your data is encrypted in transit and at rest.
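The authentication layer is the one your code touches directly: each API call must carry the resource's access key. A hedged sketch of building the request headers, assuming an Azure-style `api-key` header (the exact header name depends on the service version you target):

```python
def build_auth_headers(access_key: str) -> dict:
    """Build HTTP headers that authenticate a call to the service.
    The 'api-key' header name follows the Azure OpenAI convention;
    verify it against the API reference for your service version."""
    return {
        "api-key": access_key,
        "Content-Type": "application/json",
    }

headers = build_auth_headers("YOUR-ACCESS-KEY")
```

Encryption in transit is handled for you as long as you call the HTTPS endpoint; no extra client-side work is needed.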
Three Steps for Using OpenAI Service
- Configure - The first step is to configure a resource in the Azure portal. This creates a unique endpoint and access key that you will use to authenticate your API calls. Once the resource is configured, you can deploy an LLM to it.
- Deploy - To deploy an LLM, you provide the model's name and version. You can also specify a content filter, which screens prompts and the model's output for unwanted content. Once you have deployed a model, you can start consuming it from your application.
- Consume - To consume a deployed model, make an API call to the model's endpoint. The call takes a text prompt as input and returns the model's output, which can be a piece of text, a translation, or an answer to a question.
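The consume step above can be sketched in code. This example builds the request but does not send it, so it runs without credentials; the URL shape and `api-version` value follow the Azure OpenAI REST convention and should be checked against the API reference for your deployment, and the deployment name and key are placeholders:

```python
import json
import urllib.request

def build_completion_request(endpoint: str, deployment: str,
                             access_key: str, prompt: str,
                             api_version: str = "2024-02-01"):
    """Construct (but do not send) a chat-completion request for a
    deployed model. Sending it is a one-liner with
    urllib.request.urlopen once real credentials are in place."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps(
        {"messages": [{"role": "user", "content": prompt}]}
    ).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"api-key": access_key,
                 "Content-Type": "application/json"},
    )

req = build_completion_request(
    "https://example-resource.openai.azure.com",  # your resource endpoint
    "my-gpt-deployment",                          # your deployment name
    "YOUR-ACCESS-KEY",
    "Summarize this article in one sentence.",
)
```

On a real call, the JSON response contains the model's output under the `choices` field; parsing that is where your application takes over.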
OpenAI Service is a powerful tool that can be used to improve the functionality of your applications. By following the steps outlined in this blog, you can configure, deploy, and consume OpenAI Service in your applications.