
AI Style Generator: Part 1 (Solution)

You can find a summary of the code updates in this pull request. Read on for explanations.

1. Install openai

Since this project doesn’t come with openai installed, we’ll need to install it from the terminal:

npm install openai

2. Set up API key

When we use the OpenAI Node SDK, we’ll need our API key to be accessible as an environment variable.

For Next.js, this means defining an environment variable called OPENAI_API_KEY in .env.local.

  1. Create a .env.local file at the root of your project.
  2. Make sure .env.local is excluded from git in .gitignore.
    • .env.local is already excluded for this project, since the project was created with create-next-app. It’s good to double-check, though, since you never want your private API key to be pushed to GitHub.
  3. Add a line to the .env.local file with these contents (replace sk-proj-xxx with your API key):

.env.local

OPENAI_API_KEY="sk-proj-xxx"
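As a sanity check, it can help to fail fast if the key never made it into the environment. This is a hedged sketch, not part of the workshop code; the helper name assertApiKey is hypothetical:

```typescript
// Hypothetical helper (not part of the workshop code): fail fast if the
// API key is missing or still set to the placeholder value.
function assertApiKey(key: string | undefined): string {
  if (!key || key === "sk-proj-xxx") {
    throw new Error("OPENAI_API_KEY is not set in .env.local");
  }
  return key;
}

// Usage sketch: const apiKey = assertApiKey(process.env.OPENAI_API_KEY);
```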

3. Create a chat completion

Okay, now we’re in good shape to use the OpenAI Node SDK in our route handler. The file to edit is src/app/api/get-quote-styles/route.ts.

Create OpenAI client

We can use the default export from the “openai” package to create a new client that uses the API key from the environment variable we set up:

route.ts

import OpenAI from "openai";
 
const openai = new OpenAI({
  apiKey: process.env["OPENAI_API_KEY"],
});

Side note: we could omit the apiKey line, since process.env["OPENAI_API_KEY"] is the default value for apiKey, but I like to leave it in so a reader can tell where the API key is coming from without having to go to the OpenAI SDK docs to find out what the default is.

Call Chat Completion

I modeled this Chat Completion call after the usage example. Because openai.chat.completions.create() returns a promise that we await, we need to add async to the parent function declaration (GET()).

route.ts

export async function GET() {
  const generatedQuote = getRandomQuote();
 
  const completion = await openai.chat.completions.create({
    messages: [
      { 
        role: "user", 
        content: "Describe Dolly Parton in three words" 
      },
    ],
    model: "gpt-3.5-turbo",
  });
 
  return NextResponse.json({ quote: generatedQuote });
}

4. View the response

The response text is accessed via completion.choices[0].message.content, according to the OpenAI Chat Completions response format.
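To make that shape concrete, here is a minimal sketch of just the slice of the response object we use. The interface names (ChatCompletionLike, ChatChoice) and the helper firstResponseText are hypothetical, not SDK types; note that content is typed as string | null in the SDK, so a fallback is a reasonable precaution:

```typescript
// Minimal sketch (hypothetical types, not the SDK's) of the slice of the
// Chat Completions response that the route handler reads.
interface ChatChoice {
  message: { role: string; content: string | null };
}
interface ChatCompletionLike {
  choices: ChatChoice[];
}

// content is typed as string | null in the SDK, so fall back to "".
function firstResponseText(completion: ChatCompletionLike): string {
  return completion.choices[0]?.message.content ?? "";
}
```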

route.ts

export async function GET() {
  const generatedQuote = getRandomQuote();
 
  const completion = await openai.chat.completions.create({
    messages: [
      { 
        role: "user", 
        content: "Describe Dolly Parton in three words" 
      },
    ],
    model: "gpt-3.5-turbo",
  });
 
  const responseText = completion.choices[0].message.content;
  console.log("===========> RESPONSE", responseText);
 
  return NextResponse.json({ quote: generatedQuote });
}

I included a console.log statement so we can see the output on the terminal (since there are no UI updates in this workshop). I triggered the route handler function by accessing the endpoint (http://localhost:3000/api/get-quote-styles) from the browser. Here’s what I got in the terminal:

 GET / 200 in 127ms
 Compiled /api/get-quote-styles in 97ms (391 modules)
===========> RESPONSE Talented, iconic, philanthropic
 GET /api/get-quote-styles 200 in 855ms
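One thing this workshop version doesn’t cover is what happens when the OpenAI request fails (bad key, network error, rate limit). As a hedged sketch, and not part of the workshop solution, one option is a small wrapper so the handler can still respond; the helper name withFallback is hypothetical:

```typescript
// Hypothetical helper (not part of the workshop solution): run an async
// call and return a fallback value if it rejects, so the route handler
// can still return a JSON response when the OpenAI request fails.
async function withFallback<T>(call: () => Promise<T>, fallback: T): Promise<T> {
  try {
    return await call();
  } catch {
    return fallback;
  }
}

// Usage sketch inside GET() (the OpenAI call stands in for the real one):
//   const responseText = await withFallback(
//     async () => completion.choices[0].message.content ?? "",
//     "(no response)"
//   );
```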

Up next

All right! Now that we’ve got the OpenAI SDK set up, we’re in good shape to generate a color for the random quote in the next workshop.