The great thing about APIs is that we can start right away without too much preparation!
In this sprint, we will use the OpenAI API for completions and embeddings.
Resource: OpenAI API docs (https://platform.openai.com/docs)
Typically, calling the API is as simple as the short sketch below. It uses the official openai package and assumes your API key is available in the OPENAI_API_KEY environment variable:
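from openai import OpenAI

# minimal sketch with the official openai package; reads OPENAI_API_KEY from the environment
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Say hello!"}],
)
print(response.choices[0].message.content)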
For the sprint, however, we have hosted some models in Azure, which we access through a small helper module (llm_utils.client):
import os
from llm_utils.client import get_openai_client, OpenAIModels
print(f"GPT3: {OpenAIModels.GPT_3.value}")
print(f"GPT4: {OpenAIModels.GPT_4.value}")
print(f"Embedding model: {OpenAIModels.EMBED.value}")
MODEL = OpenAIModels.GPT_4.value
client = get_openai_client(
    model=MODEL,
    config_path=os.environ.get("CONFIG_PATH")
)
GPT3: gpt3
GPT4: gpt4
Embedding model: embed
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "How old is the earth?",
        }
    ],
    model=MODEL
)
# check out the type of the response
print(f"Response: {type(chat_completion)}") # a ChatCompletion object
Response: <class 'openai.types.chat.chat_completion.ChatCompletion'>
# print the message we want
print(f"\nResponse message: {chat_completion.choices[0].message.content}")
# check the tokens used
print(f"\nTotal tokens used: {chat_completion.usage.total_tokens}")
Response message: The Earth is approximately 4.54 billion years old.
Total tokens used: 25
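Embeddings work through the same client interface. The following is a minimal sketch that requests an embedding from the hosted embedding model via the standard embeddings endpoint; depending on how get_openai_client is configured, you may need a separate client for the embedding model, and the input text here is only an example.

EMBED_MODEL = OpenAIModels.EMBED.value
# request an embedding for a short text (sketch; assumes the client can reach the embedding deployment)
embedding_response = client.embeddings.create(
    model=EMBED_MODEL,
    input="How old is the earth?",
)
vector = embedding_response.data[0].embedding
print(f"Embedding dimension: {len(vector)}")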