The OpenAI API

Let’s get started

The great thing about APIs is that we can start right away without too much preparation!

In this sprint, we will use the OpenAI API for completions and embeddings.

Resource: OpenAI API docs (https://platform.openai.com/docs)

Authentication

Typically, it’s as simple as this:

# setting up the client in Python
# the key is read from the OPENAI_API_KEY environment variable
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("OPENAI_API_KEY")
)
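
A missing key only surfaces as an error when the first request is sent, so a quick sanity check can save some confusion. The snippet below is a minimal sketch with the standard client: models.list() is a lightweight authenticated call, so if it succeeds, the key works.

# optional sanity check: fail early if the key is not set
import os
from openai import OpenAI

api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set")

client = OpenAI(api_key=api_key)

# a lightweight request that verifies the credentials
print(client.models.list().data[0].id)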

Authentication for the seminar

For the sprint, we have hosted some models on Azure.

import os
from llm_utils.client import get_openai_client, OpenAIModels

print(f"GPT3: {OpenAIModels.GPT_3.value}")
print(f"GPT4: {OpenAIModels.GPT_4.value}")
print(f"Embedding model: {OpenAIModels.EMBED.value}")

MODEL = OpenAIModels.GPT_4.value

client = get_openai_client(
    model=MODEL,
    config_path=os.environ.get("CONFIG_PATH")
)
GPT3: gpt3
GPT4: gpt4
Embedding model: embed
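
The list above also includes an embedding model, which we will use later in the sprint. As a preview, here is a sketch of an embedding request; it assumes that llm_utils hands out a client per model and that the hosted deployment supports the standard embeddings endpoint.

# sketch: embedding a short text (assumes one client per model via llm_utils)
embed_client = get_openai_client(
    model=OpenAIModels.EMBED.value,
    config_path=os.environ.get("CONFIG_PATH")
)

embedding_response = embed_client.embeddings.create(
    model=OpenAIModels.EMBED.value,
    input="How old is the earth?",
)

vector = embedding_response.data[0].embedding
print(f"Embedding dimension: {len(vector)}")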

Creating a completion

# send a single user message to the chat completions endpoint
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How old is the earth?",
        }
    ],
    model=MODEL
)

# check out the type of the response

print(f"Response: {type(chat_completion)}") # a ChatCompletion object
Response: <class 'openai.types.chat.chat_completion.ChatCompletion'>
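
The request above only sets the required fields. The sketch below adds a system message and two common optional parameters (temperature and max_tokens); these are part of the standard chat completions API, although the hosted seminar models may restrict or ignore some of them.

# sketch: the same request with a system message and common optional parameters
chat_completion = client.chat.completions.create(
    messages=[
        {"role": "system", "content": "You are a concise geology tutor."},
        {"role": "user", "content": "How old is the earth?"},
    ],
    model=MODEL,
    temperature=0.2,   # lower values make the output more deterministic
    max_tokens=100,    # cap the length of the generated answer
)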

Retrieving the response

# print the message we want
print(f"\nResponse message: {chat_completion.choices[0].message.content}")

# check the tokens used 
print(f"\nTotal tokens used: {chat_completion.usage.total_tokens}")

Response message: The Earth is approximately 4.54 billion years old.

Total tokens used: 25
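
usage also breaks the count down into prompt_tokens and completion_tokens, which is useful for estimating cost. The helper below is a small sketch that bundles request, answer, and token report; the function name ask is our own and not part of the API.

# sketch: a small helper around the completion call (ask is our own name)
def ask(prompt: str) -> str:
    completion = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model=MODEL,
    )
    usage = completion.usage
    print(f"prompt tokens: {usage.prompt_tokens}, "
          f"completion tokens: {usage.completion_tokens}")
    return completion.choices[0].message.content

print(ask("How old is the earth?"))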