You can find a summary of the code updates in this pull request. Read on for explanations.
1. Chat Completion Configuration
Let’s start by updating some Chat Completion configurations.
JSON mode
JSON mode ensures that the response is valid JSON, which is easier to parse (and tends to be less wordy, too!).
route.ts
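Here's a minimal sketch of what the route handler looks like with JSON mode turned on (the model name and placeholder messages are illustrative, not the original code):

```ts
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

export async function GET() {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    // JSON mode: the model is constrained to reply with valid JSON
    response_format: { type: "json_object" },
    messages: [{ role: "user", content: "..." }], // real messages are covered in section 2
  });
  // ...parse and return the response (covered in section 3)
}
```

Note that JSON mode also requires the prompt itself to mention JSON; the API rejects the request otherwise.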
Temperature
Temperature determines how random the output is: the value is a number between 0 and 2, with 0 being the most deterministic and stable and 2 being the most random. The default value is 1; let’s use 0.7 to make it just a little less random:
route.ts
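In the same call, that looks something like this (everything other than temperature carries over from the previous step):

```ts
const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  response_format: { type: "json_object" },
  temperature: 0.7, // a bit less random than the default of 1
  messages, // defined in section 2
});
```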
Max Tokens
Just in case our prompt inadvertently causes ChatGPT to talk at length, let’s limit the output. The max_tokens option will cut off the response after a prescribed number of tokens.
route.ts
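Something along these lines; the limit of 100 here is illustrative, since the styles response should be a small JSON object:

```ts
const completion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  response_format: { type: "json_object" },
  temperature: 0.7,
  max_tokens: 100, // stop generating after 100 tokens
  messages, // defined in section 2
});
```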
2. Messages
Last workshop, we sent only one message (as a user). For the color prompt, let’s send two messages:
- a system message with instructions that would apply to any quote
- a user message that supplies the quote
System message
It took me a bunch of experimentation to come up with this message in src/app/api/get-quote-styles/route.ts, but it works well for my purposes. If you came up with a message you like better, I’d love to hear about it in a Substack comment, or in a LinkedIn or Twitter post.
Here’s my system message:
route.ts
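(The original snippet isn’t reproduced here; the wording below is an illustrative stand-in with the same intent: ask for a JSON object of colors that suit the quote.)

```ts
const systemMessage = {
  role: "system",
  content:
    "You will be given a quote. Respond with a JSON object containing two CSS colors " +
    "that suit the mood of the quote: backgroundColor and textColor. " +
    "Make sure the text color is readable against the background color.",
};
```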
You may have noticed a cryptic TypeScript error when you tried to add this message to the chat completion options, something like this:
To avoid this error, we need to specify as const for the "system" value:
route.ts
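With the assertion in place, the message looks like this:

```ts
const systemMessage = {
  // `as const` narrows the type from string to the literal "system",
  // which is what the SDK's message types expect
  role: "system" as const,
  content: "...", // same instructions as above
};
```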
You can read more about this requirement in this GitHub issue and the TypeScript docs on literal inference.
User message
The user message supplies the quote that we promised in the system message. To satisfy TypeScript, we need to use as const for the "user" value as well.
route.ts
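A sketch of the user message and the combined messages array (the quote variable is a stand-in for however the quote reaches the route):

```ts
const userMessage = {
  role: "user" as const,
  content: quote, // the quote we want styles for
};

const messages = [systemMessage, userMessage];
```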
3. JSON response
Let’s parse the chat completion and send the data back to the user according to the spec.
Parsing the completion
We might start with something like:
route.ts
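Roughly this, passing the completion’s content straight to JSON.parse:

```ts
const styles = JSON.parse(completion.choices[0].message.content);
```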
Unfortunately, this triggers a TypeScript error:
TypeScript is upset that completion.choices[0].message.content could be null, and JSON.parse can’t take a null argument. We can account for that with an early return:
route.ts
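For example (the exact error response is a placeholder; return whatever your spec calls for):

```ts
const rawStyles = completion.choices[0].message.content;

if (rawStyles === null) {
  // The model returned no content, so bail out early
  return new Response("No styles were generated", { status: 500 });
}

const styles = JSON.parse(rawStyles);
```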
Returning JSON
Now we can return the JSON according to the spec:
route.ts
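In an App Router route handler, that can be as simple as this (the shape of styles depends on the spec from the earlier workshop):

```ts
// Send the parsed styles back to the client as JSON
return Response.json(styles);
```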
try/catch
Finally, let’s wrap the JSON.parse in a try/catch block, just in case the completion returns invalid JSON (if rawStyles is not valid JSON, then JSON.parse() will throw an error).
route.ts
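Putting the parsing and response steps together, it might look like this (status codes and error messages are placeholders):

```ts
const rawStyles = completion.choices[0].message.content;

if (rawStyles === null) {
  return new Response("No styles were generated", { status: 500 });
}

let styles;
try {
  styles = JSON.parse(rawStyles);
} catch (error) {
  // rawStyles was not valid JSON, so JSON.parse threw
  return new Response("Could not parse the generated styles", { status: 500 });
}

return Response.json(styles);
```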
4. View the result
You can access http://localhost:3000/api/get-quote-styles to see the JSON response. You’ll get something like this:
(Any grammatical errors in the quotes will be corrected by OpenAI in a future workshop! 😊)
Up next
In the next workshop, we’ll apply these colors to the quote in the UI.
Workshops in this series
- OpenAI Node SDK
- Prompt engineering: color
- Apply color response
- Prompt engineering: quote cleanup and font
- Apply font response (coming October 10, 2024)
- Quote entry (coming October 17, 2024)