The author selected Direct Relief to receive a donation as part of the Write for DOnations program.
OpenAI GPT models have gained popularity due to their wide use in generating text for various tasks, such as drafting emails, answering customer service FAQs, and translating languages.
These GPT models are generally utilized via ChatGPT, a chatbot released by OpenAI, or through APIs and libraries that provide greater control. This tutorial will guide you on leveraging these models using the OpenAI API within your Django web project. You will learn how to call the ChatCompletion API using different parameters and how to format and utilize its responses.
By the end of this tutorial, you will have created a Django endpoint that, when called, sends a request to OpenAI to construct a short story using the provided words and returns its response.
To complete this tutorial, you will need:
An existing Django project. If you’re starting from scratch, you can set up a Django project by following the How To Set Up a Django Development Environment tutorial.
An OpenAI account: Go to the OpenAI platform website and click the ‘Sign Up’ button. After signing up, you must verify your email address and enter some personal information.
An OpenAI API key: Once your account is set up, log in and navigate to the API Keys section from your dashboard. Click ‘Create new secret key’. Your API key will be generated and look something like sk-abcdefghijklmnop. Make sure to save this key in a secure location, as you will not be able to see it again.
The OpenAI Python package: If you followed the tutorial in the first prerequisite, you should already have a virtual environment named env active within a directory named django-apps. Ensure your virtual environment is active by confirming that its name appears in parentheses at the start of your terminal prompt. If it’s not active, you can manually activate it by running the following command from the django-apps directory:
sammy@ubuntu:$ . env/bin/activate
Once your environment is active, run the following to install the OpenAI Python package:
(env)sammy@ubuntu:$ pip install openai
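To confirm the installation, you can print the package version from the command line (any recent 1.x release of the SDK should work with the code in this tutorial):
(env)sammy@ubuntu:$ python -c "import openai; print(openai.__version__)"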
In this step, you will add your OpenAI API key to the OpenAI client and make a simple API call to the ChatCompletion API. You will also examine the response you get from the API.
To get started, open your Python interpreter:
(env)sammy@ubuntu:$ python
First, import the OpenAI client and add your API key to the client:
from openai import OpenAI
client = OpenAI(api_key="your-api-key")
Replace "your-api-key"
with the actual API key you got from the OpenAI platform.
Now, let’s make an API call to the ChatCompletion API using the chat.completions.create() method:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "count 1 to 10"}])
In the code above, we specified the model to use as gpt-3.5-turbo and added a single message object containing the role user (the other options are system and assistant) and the content, or prompt, count 1 to 10.
To see the response from the API call, you can print the response message, which should contain the numbers 1 through 10 in a nice little list:
print(response.choices[0].message.content)
Output:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Congratulations! You have successfully made a simple API call to OpenAI and retrieved a response. We will format and utilize the API response to create a short story in the next steps.
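Beyond the message text, the response object also carries useful metadata, such as the exact model that served the request and the number of tokens consumed. You can inspect it in the same interpreter session (the attribute names below are part of the v1 Python SDK):
print(response.model)  # the exact model version, e.g. a gpt-3.5-turbo release
print(response.usage.prompt_tokens)  # tokens used by your prompt
print(response.usage.completion_tokens)  # tokens used by the generated reply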
Now that you have successfully made a simple API call to the ChatCompletion API, let’s explore how to work with parameters to customize the model’s behavior. Several parameters allow you to control the generation of text. We will look at three of them below.
1. Temperature: The temperature parameter determines how random the generated content is. A higher temperature value, like 0.8, will give more diverse and creative responses, while a lower temperature value, such as 0.1, will produce more focused and repeatable responses. For example:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], temperature=0.1)
print(response.choices[0].message.content)
Output:
1. Apple
2. Elephant
3. Sunshine
4. Adventure
5. Serenity
Let’s try temperature=0.1 again to see the newly generated text:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], temperature=0.1)
print(response.choices[0].message.content)
Output:
1. Apple
2. Elephant
3. Sunshine
4. Adventure
5. Serenity
The text turned out to be the same. Now, let’s try temperature=0.8 twice:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], temperature=0.8)
print(response.choices[0].message.content)
Output:
cat, apple, guitar, sky, book
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], temperature=0.8)
print(response.choices[0].message.content)
Output:
1. Apple
2. Sunshine
3. Happiness
4. Love
5. Technology
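If you want to compare both settings side by side, a small loop in the same interpreter session works well. This sketch assumes the client object from earlier is still defined; since sampling is random, your outputs will differ between runs:
for temp in (0.1, 0.8):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "mention five words"}],
        temperature=temp,
    )
    # Print each setting next to the words it produced
    print(f"temperature={temp}: {response.choices[0].message.content}")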
2. Max tokens: This allows you to limit the length of the generated text. Setting a specific value ensures the response doesn’t exceed a certain number of tokens. Tokens roughly correspond to words: a token is often a whole word or a fragment of one. For example:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], max_tokens=10)
print(response.choices[0].message.content)
Output:
1. Apple
2. Car
Changing the value to 20:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], max_tokens=20)
print(response.choices[0].message.content)
Output:
1. Apple
2. Car
3. Music
4. Ocean
5. Love
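When max_tokens cuts a reply short, the API records this in the choice’s finish_reason field, which is "length" for a truncated response and "stop" for one that ended naturally. You can check it to detect truncation:
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "mention five words"}],
    max_tokens=10,
)
# finish_reason is "length" when the reply hit the max_tokens limit
if response.choices[0].finish_reason == "length":
    print("The reply was truncated by max_tokens.")
print(response.choices[0].message.content)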
3. Stream: This determines whether the response is streamed or returned all at once. When set to True, the API response will be streamed, meaning you receive the output in chunks as it is generated. This is useful for long conversations or real-time applications. To enable streaming, add the stream parameter with a value of True to the API call. For example:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], stream=True)
collected_messages = []
for chunk in response:
    chunk_message = chunk.choices[0].delta.content
    if chunk_message is not None:
        collected_messages.append(chunk_message)
print(collected_messages)
Output:
['', 'cat', '\n', 'book', '\n', 'computer', '\n', 'sun', '\n', 'water']
In the code above, the chunk_message variable holds the message content of each chunk returned by the API. Before adding each one to the collected_messages list, we check whether the chunk’s content is None, as the content of the final chunk is usually None.
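Because the chunks arrive while the text is still being generated, you can also print them immediately instead of collecting them first; this is how chat interfaces show a reply appearing word by word. A minimal variation of the loop above:
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "mention five words"}], stream=True)
for chunk in response:
    chunk_message = chunk.choices[0].delta.content
    if chunk_message is not None:
        print(chunk_message, end="", flush=True)  # print each piece as it arrives
print()  # end with a newline once the stream finishes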
Utilizing these parameters allows you to customize the model’s behavior and control the generated responses to better suit your application or project. Experiment with different values to achieve the desired results.
In the next step, we will provide some context to the model in the form of a system prompt.
In this step, we will combine all the information we’ve learned and create a system prompt that provides context to the GPT model, telling it its purpose and specifying its rules.
First, let’s create a Python module containing a function to handle this task. Close the interpreter and create a new file called story_generator.py in your Django project directory:
(env)sammy@ubuntu:$ touch ~/my_blog_app/blog/blogsite/story_generator.py
Next, you can add the OpenAI API key to your environment variables so that you don’t have to add it directly to the Python file:
(env)sammy@ubuntu:$ export OPENAI_KEY="your-api-key"
Open story_generator.py and, inside it, create an OpenAI client and define a function called generate_story that takes a collection of words as input:
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_KEY"])

def generate_story(words):
    # Call the OpenAI API to generate the story
    response = get_short_story(words)
    # Format and return the response
    return format_response(response)
In this function, we call a separate function, get_short_story, to make the API call to OpenAI for the story, and then another function, format_response, to format the API’s response.
Now, let’s focus on the get_short_story function. Add the following to the bottom of your story_generator.py file:
def get_short_story(words):
    # Construct the system prompt
    system_prompt = f"""You are a short story generator.
Write a short story using the following words: {words}.
Do not go beyond one paragraph."""

    # Make the API call
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": system_prompt
        }],
        temperature=0.8,
        max_tokens=1000
    )

    # Return the API response
    return response
In this function, we first set up the system prompt, which informs the model about the task it needs to perform and specifies how long the story should be. We then pass this system prompt to the ChatCompletion API and return its response.
Finally, we can implement the format_response function. Add the following to the bottom of your story_generator.py file:
def format_response(response):
    # Extract the generated story from the response
    story = response.choices[0].message.content
    # Remove any leading or trailing whitespace
    story = story.strip()
    # Return the formatted story
    return story
You can now test these functions by calling the generate_story function with a collection of words as its argument and printing its response. Add the following to the bottom of your story_generator.py file:
print(generate_story("cat, book, computer, sun, water"))
Save and exit the file, then run the script to see the generated story:
(env) sammy@ubuntu:$ python ~/my_blog_app/blog/blogsite/story_generator.py
Output:
In a cozy corner of a sunlit room, a fluffy cat named Whiskers lounged lazily next to a towering bookshelf. Amongst the rows of books, a curious computer hummed softly. As the sun streamed through the window, casting a warm glow, Whiskers noticed a small water stain on the shelf. Intrigued, the cat pawed at the book closest to the mark. As if guided by fate, the book opened to reveal a hidden compartment containing a glittering diamond necklace. With the secret now unveiled, Whiskers embarked on an unexpected adventure, where the sun, water, and the power of knowledge merged into a thrilling tale of mystery and discovery.
Pretty interesting! Let’s delete the line with the print statement, since we will call the generate_story function from a Django view instead. Remove the following line from your story_generator.py file:
print(generate_story("cat, book, computer, sun, water"))
Feel free to experiment with the system prompt and add more context and rules to improve the stories generated.
Continue to the next step to integrate the story_generator module into your Django project.
To integrate the story_generator module into your Django project, you will create a Django view and a URL route. In the view, you will extract the expected words from the request, call the generate_story function, and return the response.
First, open the views.py file in your Django app directory. Import the necessary modules and add a view function called generate_story_from_words:
from django.http import JsonResponse
from .story_generator import generate_story

def generate_story_from_words(request):
    words = request.GET.get('words')  # Extract the expected words from the request
    story = generate_story(words)  # Call the generate_story function with the extracted words
    return JsonResponse({'story': story})  # Return the story as a JSON response
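Note that this view passes whatever it finds straight to generate_story, so a request without the words parameter would pass None into the prompt. As a defensive variation, you could validate the parameter first and return a 400 response when it is missing:
def generate_story_from_words(request):
    words = request.GET.get('words')
    if not words:
        # Reject requests that omit the expected query parameter
        return JsonResponse({'error': 'The "words" query parameter is required.'}, status=400)
    story = generate_story(words)
    return JsonResponse({'story': story})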
Next, open the urls.py file and add a URL pattern for the generate_story_from_words view:
from django.urls import path
from . import views  # make sure these imports are at the top of the file

urlpatterns = [
    # Other URL patterns...
    path('generate-story/', views.generate_story_from_words, name='generate-story'),
]
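Before testing over HTTP, you can optionally confirm that the route resolves by using Django’s test client inside python manage.py shell. Keep in mind that this triggers a real OpenAI request, so it consumes tokens:
from django.test import Client

c = Client()
response = c.get('/generate-story/', {'words': 'cat,book,computer,sun,water'})
print(response.status_code)  # expect 200
print(response.json()['story'])  # the generated story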
Now, you can request the /generate-story/ endpoint. For example, to test it using curl, make a GET request to the endpoint with the expected words as a query parameter. Open your terminal and run the following command:
(env)sammy@ubuntu:$ curl "http://your_domain/generate-story/?words=cat,book,computer,sun,water"
Make sure to replace http://your_domain with the actual domain where your Django project is hosted. The words cat,book,computer,sun,water represent the expected words from which you want to generate a story; you can change them to any words you prefer.
After running the command, you should see the response from the server, which will contain the generated story:
Output:
{"story": "Once upon a time, in a cozy little cottage nestled amidst a dense forest, a curious cat named Whiskers sat by the window, basking in the warm rays of the sun. As Whiskers lazily flicked his tail, his eyes caught sight of a dusty book lying on a nearby shelf. Intrigued, he carefully jumped onto the shelf, causing a cascade of books to tumble down, one opening up to reveal a hidden compartment. Inside, Whiskers discovered an ancient computer, its screen flickering to life as he brushed against the power button. Mesmerized by the glowing screen, Whiskers ventured into a world of virtual landscapes, where he roamed freely, chasing digital fish and pausing to admire breathtaking waterfalls. Lost in this newfound adventure, Whiskers discovered the wonders of both the tangible and virtual worlds, realizing that true exploration knows no bounds."}
In this tutorial, you learned how to integrate OpenAI GPT models into your Django project using the OpenAI API. You made calls to the ChatCompletion API, customized the model’s behavior with parameters such as temperature and max tokens, and created a system prompt to provide context to the model. You also integrated the story_generator module into your Django project. You can now generate short stories by requesting the /generate-story/ endpoint with the expected words as a query parameter.
To further enhance your Django project, you can explore additional functionalities of the OpenAI API and experiment with different system prompts and parameters to generate unique and creative stories.