Building a simple web app with OpenAI’s ChatGPT API, Next.js and Tailwind CSS

Tutorials

Author: David Wu
Published: March 19, 2023

We build a simple “Hello, World!”-inspired web app powered by OpenAI’s ChatGPT API with Next.js and Tailwind CSS.

You can try the version hosted on Vercel (the awesome cloud computing company founded by the creators of Next.js) here: https://bonjour-gpt.vercel.app/.

The full source code for the web app and quickstart instructions can be found on GitHub: https://github.com/david-j-wu/hello-gpt.

While building this web app, we will cover the basics of working with and understanding OpenAI’s ChatGPT, including accessing the OpenAI API, the GPT models OpenAI offers, the create chat completion endpoint and tokens, among other topics.

“Hello, GPT!”: A cropped screen recording demonstrating the functionality of the simple web app we’ll build.

Setting up a “Hello, World!” web app with Next.js and Tailwind CSS

As the starting point for our project, we use the “Hello, World!” web app with Next.js and Tailwind CSS that we built in an earlier lesson: Setting up a “Hello, World!” app with Next.js and Tailwind CSS.

If you want to get started right away, the project files are also available as a public repository on GitHub (released under the MIT License): https://github.com/david-j-wu/hello-world-nextjs-tailwind-css

The lesson itself is short and can be completed in as little as 15-30 minutes if you have prior experience with Node.js and JavaScript libraries like Next.js and React.js.

It might take longer to complete if you’re relatively new to JavaScript web development. In any case, take your time and go at your own pace.

If you are using a cloned version of this web app, then your project files may not include the node_modules folder. If this is the case, simply run npm install.

Run npm run dev to launch the web app locally.

Additional resources

Before we progress further, there are several resources that could be of interest as you work through the material here.

If you’d like to understand something about the OpenAI API better, then the most authoritative, up-to-date reference is the official docs: https://platform.openai.com/docs. Similarly so for understanding the ins-and-outs of Next.js: https://nextjs.org/docs. And the same goes for Tailwind CSS: https://tailwindcss.com/docs.

For improving your understanding of the fundamentals of HTML, CSS and JavaScript more generally, it’s hard to beat MDN Web Docs: https://developer.mozilla.org/.

In addition, OpenAI has released an example Next.js web app, a pet name generator called openai-quickstart-node, under the MIT License, using the API for the GPT-3.5 model text-davinci-003. Here, we use the API for the newer GPT-3.5 model gpt-3.5-turbo, which is the same model used in ChatGPT and 10 times cheaper than the text-davinci-003 model. With the caveat that the model used is different, the project is a fantastic learning resource: https://github.com/openai/openai-quickstart-node.

Of course, if there’s anything that you’d like to understand better, you could also ask ChatGPT: https://ai.com/.

Accessing the OpenAI API

Signing up for the OpenAI API

Direct access to OpenAI’s models via API is available through OpenAI and Microsoft Azure. The fastest way to get started is through OpenAI. But depending on your use case, particularly if you need enterprise-grade security, compliance or regional availability, Azure could be a more suitable option.

Here, we access OpenAI’s models through OpenAI’s services.

To access the OpenAI API, head over to the landing page for OpenAI’s products and sign up for an account if you don’t already have one by clicking “Get started”: https://openai.com/product.

API keys

After creating your account and setting up your billing details, go to the API Keys page: https://platform.openai.com/account/api-keys

To use OpenAI’s API in the web apps we build, we will use a secret key so that our web app can send requests to OpenAI’s services.

Keep your OpenAI API key secret

When building web apps, it is important we heed OpenAI’s advice:

Do not share your API key with others, or expose it in the browser or other client-side code.

Ignoring this advice risks someone using a key tied to your account to run up a large bill or otherwise do bad things.

Click the button labelled “Create new secret key”. A modal should appear containing a generated secret key. Once you close the modal, you won’t be able to see the key again, so be sure to securely store the key if you will need further reference to it. Having said that, you can create and revoke (delete) keys as you like, so it’s not the end of the world if you forget a key.

Environment variables

The gist of environment variables

We will store our OpenAI API secret key in an environment variable.

Environment variables are text key-value pairs that are stored in a file as part of our project files, but separate from the logic of our web app.

Typically, the file has a name like .env or an environment-specific name like .env.local, .env.development or .env.production. These files usually live in the root directory of our project, rather than a subdirectory.

It’s best practice to store data like API keys in an environment variable, rather than directly embedding them in our logic.

We use environment variables in the following way. Suppose we have a file called .env with the following key:

KEY=NAME

Then we can load the key in our code as follows:

const key = process.env.KEY

The variable key would be a string with the value "NAME".
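As a minimal sketch, here is that lookup with a guard for a missing variable. Since Node.js itself doesn’t read .env files, we simulate the KEY=NAME entry from above by setting it directly:

```javascript
// Simulate the .env entry KEY=NAME (normally loaded by a framework or dotenv)
process.env.KEY = "NAME";

// Environment variables always come through as strings (or undefined)
const key = process.env.KEY;
if (key === undefined) {
  throw new Error("Missing environment variable: KEY");
}
console.log(key); // → NAME
```

Guarding like this surfaces a missing variable immediately, rather than letting an undefined value fail somewhere deeper in the app.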

Environment variables in Next.js

In Next.js, when running our app locally in development, rather than in production on a live server, environment variables are stored in a file named .env.local located in the root of our project directory.

And as environment variables are often used to store secrets, our project’s .gitignore file includes the line .env*.local so that we won’t accidentally upload our .env.local file and all of its secrets into a remote repository.

Importantly, environment variables that aren’t prefixed with NEXT_PUBLIC_ won’t be exposed to the browser.

Creating a .env.local file for our project

Let’s store an OpenAI API key in an environment variable for our project. Create the file .env.local in your project directory. For example, if your project is called hello-gpt, create the file hello-gpt/.env.local.

As we discussed earlier, generate a new secret key on the OpenAI website and copy its content into your clipboard. Let’s suppose the key is my-openai-api-key-1. (Realistically, it will be a long, random sequence of numbers and letters.)

Then populate .env.local with the following content:

OPENAI_API_KEY=my-openai-api-key-1

Now, in our web app, we can access our secret key through process.env.OPENAI_API_KEY.

“Hello, GPT!”: Writing an endpoint for our web app using OpenAI’s ChatGPT API

Quickstart: API endpoints in Next.js

The magic of Next.js is that it allows us to use it to build both the frontend and backend of our web app. In Next.js, our backend, server-side API endpoints live in the pages/api/ directory. Each file corresponds to an endpoint. At present, there is a single file under this directory: pages/api/hello.js.

To see what happens when we send a request to this endpoint, open http://localhost:3000/api/hello in your browser. The response returned by the browser contains the following text in JSON format:

{"name":"John Doe"}

Let’s take a look at the contents of pages/api/hello.js:

export default function handler(req, res) {
  res.status(200).json({ name: 'John Doe' })
}

We have a function handler, sometimes called a route handler, with arguments for the request received by the server-side endpoint from the client-side browser, req, and the response sent back to the browser, res.

The general pattern is that the incoming request triggers the route handler, handler. Information on the request may then be used to process the response.

Here, we can see that upon receiving a request, the success HTTP status code 200 is attached to the response res. Furthermore, a JSON payload containing the object { name: 'John Doe' } is attached to the response. This corresponds to the text we see when we visit http://localhost:3000/api/hello.
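To see this request/response flow without running a server, we can exercise a handler of the same shape against minimal mock req and res objects. The mocks and the name query parameter here are illustrative, not part of Next.js:

```javascript
// A Next.js-style route handler that echoes a `name` query parameter
// (falling back to the default from pages/api/hello.js)
function handler(req, res) {
  const name = req.query.name || "John Doe";
  res.status(200).json({ name });
}

// Minimal mocks of Next.js's req and res objects for local experimentation
const req = { query: { name: "Ada" } };
const res = {
  status(code) { this.statusCode = code; return this; },
  json(body) { this.body = body; return this; },
};

handler(req, res);
console.log(res.statusCode, JSON.stringify(res.body)); // → 200 {"name":"Ada"}
```

Note how res.status(...) returns the response object itself, which is what makes the chained res.status(200).json(...) call in hello.js possible.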

Our ChatGPT-powered endpoint

Let’s write a ChatGPT-powered endpoint for our web app. Let’s replace the contents of hello.js with the following content, which is the complete logic for our endpoint:

// Logic for the `api/hello` endpoint
export default async function handler(req, res) {
  try {
    // Sending a request to the OpenAI create chat completion endpoint

    // Setting parameters for our request
    const createChatCompletionEndpointURL =
      "https://api.openai.com/v1/chat/completions";
    const promptText = `Write five variations of "Hello, World!".

Start each variation on a new line. Do not include additional information.
    
Here is an example:

Hello, World!
Bonjour, Earth!
Hey, Universe!
Hola, Galaxy!
G'day, World!`;
    const createChatCompletionReqParams = {
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: promptText }],
    };

    // Sending our request
    const createChatCompletionRes = await fetch(
      createChatCompletionEndpointURL,
      {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer " + process.env.OPENAI_API_KEY,
        },
        body: JSON.stringify(createChatCompletionReqParams),
      }
    );

    // Processing the response body
    const createChatCompletionResBody = await createChatCompletionRes.json();

    // Error handling for the OpenAI endpoint
    if (createChatCompletionRes.status !== 200) {
      let error = new Error("Create chat completion request was unsuccessful.");
      error.statusCode = createChatCompletionRes.status;
      error.body = createChatCompletionResBody;
      throw error;
    }

    // Properties on the response body
    const completionText =
      createChatCompletionResBody.choices[0].message.content.trim();
    const usage = createChatCompletionResBody.usage;

    // Logging the results
    console.log(`Create chat completion request was successful. Results:
Completion: 

${completionText}

Token usage:
Prompt: ${usage.prompt_tokens}
Completion: ${usage.completion_tokens}
Total: ${usage.total_tokens}
`);

    // Sending a successful response for our endpoint
    res.status(200).json({ completion: completionText });
  } catch (error) {
    // Error handling

    // Server-side error logging
    console.log(`${error.message} Thrown error:
Status code: ${error.statusCode}
Error: ${JSON.stringify(error.body)}
`);

    // Sending an unsuccessful response for our endpoint
    res
      .status(error.statusCode || 500)
      .json({ error: { message: "An error has occurred" } });
  }
}

If you visit http://localhost:3000/api/hello, then you will see something like this, which is a variation of “Hello, World!” generated by ChatGPT:

{"completion":"Hi, Planet!\nYo, Globe!\nGreetings, Cosmos!\nAloha, Solar System!\nSalut, Terra!"}

And if you refresh the page, you will see different variations of this well-known phrase.

Next, we’ll walk through the logic for this endpoint. Here, the subsection headers loosely correspond to the comments in hello.js.

Sending a request to the OpenAI create chat completion endpoint

try-catch and async-await

For this web app, we’ll make use of the OpenAI API’s create chat completion endpoint. As we’ll be sending a request to OpenAI’s servers, we’re going to use the try-catch and async-await patterns. In our try block, we’ll try to send a request to the OpenAI create chat completion endpoint.

To send a request to the OpenAI create chat completion endpoint, we use the JavaScript Fetch API to specify the request that we want to send to OpenAI’s servers. (See the next section for more on the parameters we’ll use.)

As this involves sending a request to OpenAI’s servers, this method returns a promise. To resolve the promise, we use the async-await pattern, prefixing our handler(...) function definition with the keyword async and our fetch(...) function call with the keyword await. We store the response returned by OpenAI’s servers in the constant variable createChatCompletionRes.
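The pattern can be sketched in isolation with a stand-in for fetch(...). Here, fakeFetch is a hypothetical helper that returns a promise resolving after a short delay, standing in for a real network round trip:

```javascript
// A promise-returning stand-in for fetch(...): resolves with a minimal
// response-like object after a short simulated delay
function fakeFetch(url) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ status: 200, url }), 10)
  );
}

// The async-await pattern: `async` on the function, `await` on the call
async function handler() {
  const createChatCompletionRes = await fakeFetch(
    "https://api.openai.com/v1/chat/completions"
  );
  console.log(createChatCompletionRes.status); // → 200
}

handler();
```

Without the await keyword, createChatCompletionRes would hold a pending promise rather than the resolved response object.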

Setting parameters for our request

Create chat completion endpoint parameters

There are several parameters that can be specified when using the create chat completion endpoint. We discuss the two required parameters, which we have specified in createChatCompletionReqParams:

  • model
  • messages

Optional parameters include temperature, n, stream and others, which we won’t discuss here to keep things simple. See the OpenAI API docs page on the create chat completion endpoint for a reference list of the possible parameters: https://platform.openai.com/docs/api-reference/chat.

Models

We use the latest ChatGPT model, a GPT-3.5 model, specified in our parameters as gpt-3.5-turbo. The gpt-3.5-turbo model is the most capable ChatGPT model currently made available by OpenAI. Its training data has a cut-off date of September 2021.

There are other GPT-3.5 models such as the Davinci model (text-davinci-003). These models involve trade-offs between capability, speed and cost.

See the full list of GPT models available through the OpenAI API here: https://platform.openai.com/docs/models/gpt-3-5.

Messages

The parameter messages consists of an array of objects, which we’ll call messages. A message has the following properties:

  • A message has two parameters: role and content
  • The role parameter has three possible values: "system", "user" and "assistant"
    • "system": A system message is optional and is typically the first message. It is generally used to set the behaviour of ChatGPT
    • "user": User messages correspond to instructions given to ChatGPT either by the users or developers of an application
    • "assistant": Assistant messages correspond to replies (completions) generated by ChatGPT earlier in the conversation or to example replies provided by users or developers
  • The content parameter must be a string value

Here is an example for messages:

[
  {
    role: "system",
    content:
      "You are a helpful chatbot that answers questions and instructions in one sentence.",
  },
  { role: "user", content: "What animals can I see at the zoo?" },
  {
    role: "assistant",
    content:
      "You can see a variety of animals at the zoo including lions, tigers, bears, elephants, giraffes and many more.",
  },
  { role: "user", content: "Please provide more examples." },
]

Prompts

For our web app, to keep things simple, we will only provide a single user message.

The aim of our prompt is to generate variations of “Hello, World!”

To achieve this aim, we use a “show and tell” approach in writing our prompt:

  • We tell the model our aim: Write five variations of "Hello, World!"
  • We provide it additional instructions: Start each variation on a new line. Do not include additional information.
  • And we show it an example: Hello, World!, Bonjour, Earth!, Hey, Universe!, Hola, Galaxy! and G'day, World!

The OpenAI API docs include a more comprehensive discussion of prompt design in its sections on text completions (https://platform.openai.com/docs/guides/completion) and chat completions (https://platform.openai.com/docs/guides/chat).

Sending our request using the Fetch API

We send our request to OpenAI’s servers using the built-in JavaScript Fetch API. Practically speaking, the Fetch API provides us with tools to work with HTTP requests and responses, with the primary tool being the fetch(...) method.

You can learn about the Fetch API in greater detail at MDN Web Docs: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API.

In our web app, we provide the fetch(...) method with two arguments:

  • The URL of the endpoint we want to send a request to: In this case, the create chat completion endpoint URL, createChatCompletionEndpointURL, which has the string value "https://api.openai.com/v1/chat/completions"
  • An object describing the request that we would like to send: There are many options that can be specified. We specify three such options: method, headers and body

Next, we discuss the options that we specify for our request in greater detail.

Request method

We specify the HTTP method, method, as the POST method, "POST", per the OpenAI API reference for the chat completion endpoint: https://platform.openai.com/docs/api-reference/chat.

Request headers

We specify two headers, headers, for our request:

  • "Content-Type": We specify the content type as "application/json", which tells OpenAI’s servers that the body of our request contains data in JSON format
  • Authorization: We specify our OpenAI API key as a bearer token using the following string, making use of our OPENAI_API_KEY environment variable: "Bearer " + process.env.OPENAI_API_KEY

Request body

We specify the parameters for the chat completion we would like to create using the body parameter. The body parameter takes a JavaScript object in string form, often called a JSON string.

Fortunately, we can use the JSON.stringify(...) method for this purpose, passing in our object of parameters, createChatCompletionReqParams.
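For instance, stringifying parameters shaped like ours produces the JSON string that becomes the request body (the prompt here is shortened for illustration):

```javascript
// Parameters for a create chat completion request, with a short example prompt
const createChatCompletionReqParams = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say hello." }],
};

// Serialize the JavaScript object into a JSON string for the `body` option
const body = JSON.stringify(createChatCompletionReqParams);
console.log(body);
// → {"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"Say hello."}]}
```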

Processing the response body

The completions returned by ChatGPT will be contained in the body of the response. We want to access the body of the response as a JavaScript object.

The response returned by the create chat completion endpoint, createChatCompletionRes, is an object of type Response. To access the body of the response as a JavaScript object, we can call this object’s json() method.

As this method returns a promise, we prefix our method call with the keyword await. We store the response body in the variable createChatCompletionResBody.
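As a sketch, a Response-like object with an asynchronous json() method is consumed the same way. Here, fakeRes is a mock, not a real Response:

```javascript
// A mock Response-like object whose json() method returns a promise,
// mirroring the shape of a create chat completion response body
const fakeRes = {
  status: 200,
  json: async () => ({ choices: [], usage: {} }),
};

(async () => {
  // Awaiting json() yields the parsed body as a plain JavaScript object
  const createChatCompletionResBody = await fakeRes.json();
  console.log(Array.isArray(createChatCompletionResBody.choices)); // → true
})();
```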

Error handling for the OpenAI endpoint

Our request to OpenAI’s servers could fail for any number of reasons. For example, our API key may be invalid or our Internet connection may not be reliable.

Therefore, when preparing the response for our endpoint, /api/hello, we want to be able to send a successful response and an unsuccessful response, reflecting the success or not of our request to OpenAI’s servers.

To determine if the request to OpenAI’s server was successful or not, we can look at the status property of the response object, createChatCompletionRes.

If the request was successful, the response will have a status code of 200. Otherwise, it will have a different status code such as 400.

Therefore, once the response from OpenAI’s servers has been returned, we check the status on the response.

If it is not 200, then we create an object error of type Error. We specify several properties on this object:

  • message: We specify this by the string we pass into the object’s constructor: "Create chat completion request was unsuccessful."
  • statusCode: We pass in the status code on the response returned by OpenAI’s servers
  • body: We pass in the body of the response returned by OpenAI’s servers, as OpenAI will provide us with data about what went wrong

Note that message is a default property of objects of type Error, while statusCode and body are not.

Then, we throw the error using the throw keyword. This will cause our web app to exit the try block and enter the catch block. We will have more to say about the logic in the catch block shortly.
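The throw-and-catch flow can be sketched in isolation. The 401 status and error body here are illustrative values of the kind OpenAI’s servers might return for an invalid API key:

```javascript
// Build and throw an Error carrying extra diagnostic properties,
// mirroring the error handling in our endpoint
function failRequest() {
  const error = new Error("Create chat completion request was unsuccessful.");
  error.statusCode = 401; // illustrative status code
  error.body = { error: { message: "Invalid API key" } }; // illustrative body
  throw error;
}

let caught;
try {
  failRequest();
} catch (error) {
  // The custom properties survive the throw and are available here
  caught = error;
  console.log(`${error.message} Status code: ${error.statusCode}`);
}
```

Because statusCode and body are ordinary properties on the error object, the catch block can use them to build a meaningful response for our own endpoint.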

Properties on the response body

We already mentioned that the response body will contain useful information. Here is an example of a response body for our endpoint:

{
  id: "chatcmpl-6t8rMBDJrAr7GUW4FPh5K7bzp4jWY",
  object: "chat.completion",
  created: 1678600116,
  model: "gpt-3.5-turbo-0301",
  usage: { prompt_tokens: 58, completion_tokens: 26, total_tokens: 84 },
  choices: [
    {
      message: {
        role: "assistant",
        content:
          "\n" +
          "\n" +
          "Hi, Earth!\n" +
          "Greetings, Planet!\n" +
          "Yo, Cosmos!\n" +
          "Ni hao, Universe!\n" +
          "Aloha, World!",
      },
      finish_reason: "stop",
      index: 0,
    },
  ],
}

We are particularly interested in two properties: usage and choices, which we describe further in the following sections.

Token usage

The value of the usage property is an object that contains information on token usage. OpenAI API usage is charged on a per-token basis. At the time of publication, for the gpt-3.5-turbo model, that is at a rate of $0.002 per one thousand tokens.

The usage object contains information on the number of tokens consumed to process our prompt (prompt_tokens) and the completion (completion_tokens), as well as the total number of tokens consumed (total_tokens).

In our web app, we store the usage property in a variable usage of the same name.

Response messages and choices

The value of the choices property is an array that contains the completion message in response to our prompt.

It is an array because, in our create chat completion request, we can optionally indicate that we would like ChatGPT to generate multiple completions in response to our prompt (put another way, multiple choices), from which we could then choose the best or otherwise integrate into our application in another way.

In our request, we could have specified the number of choices to generate using the n parameter. This parameter has a default value of 1.

For the purposes of our /api/hello endpoint, we are only interested in the completion text. We store the completion text in the variable completionText. Here, we have also used the string method trim() to remove the unnecessary new line characters (\n) from the start of the completion.
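This extraction can be sketched against a response body shaped like the example above (the values here are illustrative):

```javascript
// A response body shaped like OpenAI's example above, with illustrative content
const createChatCompletionResBody = {
  choices: [
    {
      message: {
        role: "assistant",
        content: "\n\nHi, Earth!\nGreetings, Planet!",
      },
      finish_reason: "stop",
      index: 0,
    },
  ],
};

// Take the first choice's message content and strip leading/trailing whitespace
const completionText =
  createChatCompletionResBody.choices[0].message.content.trim();
console.log(completionText);
```

Note that trim() only removes whitespace at the start and end of the string; the new line characters between the variations are preserved.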

Logging the results

On the server side, it’s useful to implement some basic logging. For a successful response, we log in the console the completion text using the variable completionText and data on token usage using the usage variable.

Sending a successful response for our endpoint

Finally, we prepare the response for our /api/hello endpoint for a success case. We chain method calls on the res object in the following way:

  • status(200): This specifies the success status code 200 on our response
  • json({ completion: completionText }): This specifies the provided argument as the body of our response. We specify a property completion with the stored value being our completion text, completionText

Error handling

If our request to OpenAI’s servers is unsuccessful, then our web app will exit the try block after throwing the error we specified (throw error). After that, it will enter the catch block.

We have passed the thrown error, error, into our catch block, and so have access to it in our error handling logic.

Server-side error logging

In the unsuccessful case, we log the contents of error into the server-side console, which will help us debug unsuccessful requests to OpenAI’s servers.

Sending an unsuccessful response for our endpoint

Like in the success case, we end the unsuccessful case by preparing the response for our /api/hello endpoint. As before, we chain method calls on the res object, doing so in the following way:

  • status(error.statusCode || 500): This specifies the unsuccessful status code on our response. If error.statusCode is defined, this mirrors the status code on the response returned by OpenAI’s servers. As we also want to handle unexpected errors in our catch block, if error.statusCode is undefined, then we set the status code to 500, which is customarily used for unexpected errors
  • json({ error: { message: "An error has occurred" } }): We specify an error object on the body of our response with a message property. We keep the message simple: "An error has occurred".
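The fallback behaviour can be sketched as a small helper. Here, chooseStatusCode is a hypothetical name; the endpoint simply inlines this expression:

```javascript
// Prefer the status code attached to the error; fall back to 500
// for unexpected errors that carry no statusCode property
function chooseStatusCode(error) {
  return error.statusCode || 500;
}

// An error thrown by our endpoint logic, with an illustrative status code
const openAIError = new Error("Create chat completion request was unsuccessful.");
openAIError.statusCode = 429;

// An unexpected error with no statusCode property
const unexpectedError = new Error("Something else went wrong");

console.log(chooseStatusCode(openAIError)); // → 429
console.log(chooseStatusCode(unexpectedError)); // → 500
```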

A simple frontend UI for our web app

That handles the backend for our web app. Next, we’ll build a simple frontend UI for our web app.

“Hello, GPT!”: The UI of our web app on initial load.

At present, if you open http://localhost:3000/ in your browser, you’ll simply see the following text: “Hello, World!”

This corresponds to the file pages/index.js in our project. At present, it has the following contents:

import Head from "next/head";

export default function Home() {
  return (
    <>
      <Head>
        <title>"Hello, World!" app with Next.js and Tailwind CSS</title>
        <meta
          name="description"
          content='"Hello, World!" app with Next.js and Tailwind CSS'
        />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main>
        <h1 className="text-4xl font-bold text-blue-600 ">Hello, World!</h1>
      </main>
    </>
  );
}

We’re going to build our frontend in two steps:

  1. Replace the contents of index.js
  2. Place the necessary images we need for index.js into our public folder

First, let’s replace the contents of index.js with the code for our desired frontend:

import Head from "next/head";
import Image from "next/image";
import { useState } from "react";

export default function Home() {
  // Defining state hooks
  const [reply, setReply] = useState("");
  const [loadingStatus, setLoadingStatus] = useState(false);

  // Making a client-side request to our endpoint
  async function onSubmit(event) {
    event.preventDefault();
    setLoadingStatus(true);
    try {
      const response = await fetch("/api/hello");
      const body = await response.json();

      setReply(response.status === 200 ? body.completion : body.error.message);
    } catch {
      setReply("An error has occurred");
    }
    setLoadingStatus(false);
  }

  // Creating the UI
  return (
    <>
      <Head>
        <title>Hello, GPT!</title>
        <meta
          name="description"
          content={
            '"Hello, GPT!": A simple ChatGPT-powered app' +
            " built with Next.js and Tailwind CSS"
          }
        />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link rel="icon" href="/favicon.ico" />
      </Head>
      <main className="mx-auto flex h-screen max-w-xs flex-col">
        <div className="mt-32">
          <h1 className="text-center text-6xl font-bold text-blue-300">
            Hello, GPT!
          </h1>
        </div>
        <div className="mx-auto my-6">
          <Image
            src="waving-hand.svg"
            width={120}
            height={120}
            alt="A cartoon drawing of a waving hand"
            priority
          />
        </div>
        <div className="mx-auto">
          <form onSubmit={onSubmit}>
            <button
              className="mb-3 rounded-md border-2 border-blue-600 bg-blue-600 
              px-4 py-2 hover:border-blue-700 hover:bg-blue-700"
              type="submit"
            >
              <p className="text-[20px] font-bold text-white">Say hello</p>
            </button>
          </form>
        </div>
        {loadingStatus ? (
          <div className="mx-auto mt-3">
            <Image
              src="three-dots.svg"
              width={60}
              height={15}
              alt="A loading indicator of three animated dots"
            />
          </div>
        ) : (
          <div className="mt-3">
            <p
              className="whitespace-pre-line text-center text-[20px] 
              font-bold text-slate-600"
            >
              {reply}
            </p>
          </div>
        )}
      </main>
    </>
  );
}

Second, we need to place the following files into the folder public:

  • three-dots.svg: We use the three-dots.svg loader from Sam Herbert’s (Twitter: @Sherb) excellent SVG Loaders project (https://samherbert.net/svg-loaders/), in which he has released a collection of SVG loaders under the MIT License. Download the repo on GitHub here: https://github.com/SamHerbert/SVG-Loaders. Navigate to svg-loaders/three-dots.svg and copy-and-paste, cut or drag-and-drop the file into the public folder of your project. In addition, by default the animation is white (with colour code #fff). We will change the colour to a blue grey. Open public/three-dots.svg and in the second line with the opening svg tag, find the following attribute: fill="#fff". Replace #fff with the colour code #94a3b8 and hit save.
  • waving-hand.svg: We use the waving hand emoji from Twitter’s Twemoji project (https://twemoji.twitter.com/). The graphics for the project are licensed under CC-BY 4.0 and the code is licensed under the MIT License. The repo is available on GitHub here: https://github.com/twitter/twemoji. But due to the large number of emojis in the collection, it can be quite difficult to navigate the repo. Instead, we can use the Twemoji Cheatsheet developed by Shahriar Khalvati (GitHub: @ShahriarKh) to find the SVG waving hand emoji: https://twemoji-cheatsheet.vercel.app/. (The SVG is also directly available here: https://cdn.jsdelivr.net/gh/twitter/twemoji@14.0.2/assets/svg/1f44b.svg.) In any case, download the SVG, rename it to waving-hand.svg and place it into the public folder of your project

At this point, if your web app hasn’t already auto-reloaded, run npm run dev. The web app should function as intended.

“Hello, GPT!”: The UI of our web app after clicking the “Say hello” button.

Next, we’re going to walk through the logic for pages/index.js. As before with our walkthrough of pages/api/hello.js, the subsection headers loosely correspond to the comments in index.js.

Defining state hooks

We begin by defining the state hooks we will use in our web app. State hooks help us manage the flow of data in our web app. Each state hook consists of a state variable and a hook function. We define two state hooks:

  • reply and setReply: The reply state variable will store the completion text contained in the response returned by the /api/hello endpoint when the request to OpenAI’s servers is successful. When the request isn’t successful, it will store an error message
  • loadingStatus and setLoadingStatus: The loadingStatus state variable will have a boolean value that is true while a request is sent to the /api/hello endpoint and the frontend of our web app is waiting for a response and false otherwise. While the loadingStatus state variable is true, we will show the user a loading indicator. This will substantially improve the user experience of our web app

Making a client-side request to our endpoint

Next, we make a request to the /api/hello endpoint from our frontend. We write a function onSubmit(...), that will be triggered on clicking the “Say hello” button.

When the user clicks the “Say hello” button, in web development parlance, we say that an event has been triggered. We call onSubmit(...) the event handler. In this and other cases, our web app will automatically pass an event object (typically denoted event or e) to our event handler, which provides us with access to additional features.

Indeed, we begin the implementation of onSubmit(...) by calling the preventDefault() method of the event argument, event. This is because our event handler is attached to an HTML form element, and we would like to disable the default behaviour of triggering a form submission.

Following that, we use the try-catch pattern to try and send a request to our /api/hello endpoint. We sandwich our try and catch blocks with hook calls setLoadingStatus(true) and setLoadingStatus(false).

In the try block, we use the fetch(...) method to send a request to our /api/hello endpoint. In this case, as we are not specifying any parameters or a body on our request, we need only pass the endpoint URL "/api/hello" to the fetch(...) method. Also, as we have not specified an HTTP method in our argument for fetch(...), our request will use the default method, GET, which suits our purposes just fine.

We store the result in the variable response and also prefix our method call with the keyword await (and our function definition with the keyword async) as is customary when using fetch(...). We store the response body in the variable body, prefixing the method call response.json() with the keyword await as well.

Next, we call the setReply(...) hook. We use the conditional ternary operator to pass in the argument that we will assign to our state variable reply. If the status code of the response (response.status), which mirrors the status code on the response to our request to OpenAI’s servers, is 200, then we pass in the completion text attached to the body of the response (body.completion) as our argument. If the status code isn’t 200, then that means the logic in our /api/hello endpoint threw an error. In this case, we set the reply to be the error message we specified (body.error.message).

If some other unexpected error occurs, control will enter the catch block. In this case, we set the reply directly as the following error message: "An error has occurred".
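Putting the above together, the client-side logic might be sketched as follows. This is not the exact component code: here the state setters setReply and setLoadingStatus are passed in explicitly so the sketch is self-contained, whereas in the component they come from the useState hooks.

```javascript
// A sketch of the client-side request logic described above.
function replyFromResponse(status, body) {
  // Mirrors the conditional ternary operator: a 200 response carries the
  // completion text; anything else carries the error message set by our endpoint
  return status === 200 ? body.completion : body.error.message;
}

async function onSubmit(event, setReply, setLoadingStatus) {
  event.preventDefault(); // disable the default form-submission behaviour
  setLoadingStatus(true);
  try {
    const response = await fetch("/api/hello"); // default HTTP method: GET
    const body = await response.json();
    setReply(replyFromResponse(response.status, body));
  } catch (error) {
    setReply("An error has occurred");
  }
  setLoadingStatus(false);
}
```

Note how setLoadingStatus(true) and setLoadingStatus(false) sandwich the try and catch blocks, so the loading indicator is shown for exactly as long as the request is in flight.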

Creating the UI

We write the React UI using JSX, which combines aspects of HTML, CSS and JavaScript. We’ll use Tailwind CSS to style our UI.

Next, we’ll walk through the key parts of our UI.

The Head component

The Head component is a special component provided by Next.js. It mirrors the usage of the head element in traditional HTML. We use it to set metadata related to our web app.

You can learn more about the Head component by reading the Next.js docs: https://nextjs.org/docs/api-reference/next/head.
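As a sketch, our usage of the Head component might look like the following (the description text is illustrative):

```jsx
import Head from "next/head";

// Inside the JSX returned by our page component
<Head>
  <title>Hello, GPT!</title>
  <meta name="description" content="A simple ChatGPT-powered web app" />
</Head>
```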

Structure of our main element

The main element should contain the primary content of our web app. In this case, we specify a container with the following attributes using Tailwind CSS:

  • h-screen and max-w-xs: The height of the container will be the full height of the screen of the device the user is using. Block-level HTML elements will often automatically expand to fill the available width, so we cap the container’s width at Tailwind’s “extra small” size. This also means our web app will be mobile friendly
  • mx-auto: This centers the container horizontally in the page
  • flex and flex-col: Although the container has been centred horizontally in the page, elements in the container may not be centred horizontally within the container itself. To address this, we can use flex to specify our container as a flexbox and the components and elements inside of it as flex items. By default, however, this will align our flex items along a horizontal axis. Specifying flex-col will align our flex items along a vertical axis instead
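Putting the attributes above together, the container might be sketched as:

```jsx
<main className="h-screen max-w-xs mx-auto flex flex-col">
  {/* The heading, image, form and reply are rendered here as flex items */}
</main>
```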

The h1 for our web app

Here, we insert the name for our web app: Hello, GPT! We add some HTML and CSS to position the element appropriately and make it stand out.

Using the Image component to show waving-hand.svg

To insert images in Next.js, we use the special Image component, rather than the traditional img HTML element. We use waving-hand.svg as the main image for our web app.

To learn more about how the Image component is used, see the Next.js docs: https://nextjs.org/docs/api-reference/next/image.
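A sketch of the Image usage, assuming waving-hand.svg sits in the public folder; the alt text and dimensions are illustrative:

```jsx
import Image from "next/image";

// Inside the JSX returned by our page component
<Image src="/waving-hand.svg" alt="A waving hand" width={150} height={150} />
```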

Adding interactivity: form component and onSubmit(...)

We use the HTML form element to add the main interactivity for our web app. The form element has an attribute onSubmit, which we can use to specify an event handler that is triggered when a form submission is registered. We specify our own onSubmit(...) function. The variable onSubmit is surrounded by curly brackets, as is done when embedding JavaScript expressions in JSX.

As a child element of the form element, we create a blue-coloured button using the button element. Importantly, we specify the attribute type with the value "submit", which will ensure the onSubmit event handler is triggered when this button is pressed. As a further child element of the button element, we use a paragraph p element to add some text onto our button: “Say hello”.
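The form and button described above might be sketched as follows (the Tailwind classes on the button are illustrative):

```jsx
<form onSubmit={onSubmit}>
  <button type="submit" className="rounded bg-blue-500 px-4 py-2">
    <p className="text-white">Say hello</p>
  </button>
</form>
```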

Rendering the reply state variable and using the loadingStatus state variable and three-dots.svg to improve user experience

There is a short but noticeable delay between when we send our request to OpenAI’s servers and when the response is returned and processed. This delay will generally be longer if our desired prompt and completion are longer.

To improve the user experience of our web app, we want to add a loading indicator that is shown immediately after we send our request to OpenAI’s servers and then replaced with the completion text once the response has returned and been processed.

To do this, we insert JavaScript into our component. Within curly brackets we use a ternary operator: If the state variable loadingStatus has the value true, then we display an Image component showing the loading animation three-dots.svg. Otherwise, loadingStatus is false, and in this case we display the contents of the state variable reply within a p element.

Notice here that since the default value of the state variable reply is the empty string, on initial load our web app will render an empty p element, which is not visible to the user.
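The conditional rendering described above might be sketched as follows (the alt text and dimensions on the loading image are illustrative):

```jsx
{loadingStatus ? (
  <Image src="/three-dots.svg" alt="Loading" width={60} height={20} />
) : (
  <p>{reply}</p>
)}
```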

Summary and next steps

We built a simple “Hello, World!”-inspired app powered by OpenAI’s ChatGPT API with Next.js and Tailwind CSS. We covered the following topics:

  • Accessing the OpenAI API, including using environment variables in Next.js
  • Writing a ChatGPT-powered endpoint (/api/hello) for our app in Next.js and using the Fetch API
  • The basics of using the gpt-3.5-turbo model and the create chat completion endpoint, including tokens and error handling
  • Creating a simple frontend UI in our app using React via Next.js and Tailwind CSS and hooking it up to our endpoint

As for next steps, this web app can serve as a springboard for bringing your own ideas to life. This is as simple as editing promptText in pages/api/hello.js and updating pages/index.js to create a more relevant UI. The possibilities really are endless, and that’s incredibly exciting.

👋 Thanks for reading


The goal of this blog is to create useful content for devs and share my own learnings.

My current passion project is to help devs learn how to use the OpenAI API.

Join my email list to hear about new content. No spam, unsubscribe at any time.