Building a chatbot with OpenAI’s ChatGPT API, Next.js and Tailwind CSS


David Wu


April 16, 2023

We build a chatbot web app powered by OpenAI’s ChatGPT API with Next.js and Tailwind CSS.

You can try the version hosted on Vercel (the awesome cloud computing company founded by the creators of Next.js) here:

The full source code for the web app and quickstart instructions can be found on GitHub:

This tutorial has been written such that you can probably have your own chatbot web app running locally in the time it takes you to finish drinking a coffee. This assumes you copy-and-paste the code in this post and already have Node.js installed and your OpenAI account set up.

Having said that, please go at your own pace. To support those who'd like to go deeper, I've included some high-level explanations in this tutorial and plenty of comments in the provided source code.

If you’d like to read a tutorial that goes deeper into the basics of the OpenAI API, Next.js and Tailwind CSS, then you might enjoy my earlier tutorial: Building a simple web app with OpenAI’s ChatGPT API, Next.js and Tailwind CSS.

GPT Chatbot: A cropped screen recording demonstrating the functionality of the web app we’ll build.


You’ll need the following on hand:

  • Node.js version 14.6.0 or later for Next.js
  • An OpenAI API key

Head over to the official Node.js website or the OpenAI API website if you don't have either of these.

Building the base web app

We use a “Hello, World” web app with Next.js and Tailwind CSS as the base for our chatbot web app.

You’ve probably built this kind of web app before, so I’ve made the code available in a public repo:

Use the following command to clone the source code into a directory called gpt-chatbot in your present working directory:

git clone <repository-url> gpt-chatbot

Alternatively, you can follow an earlier tutorial that I wrote that walks through building this web app: Setting up a “Hello, World!” web app with Next.js and Tailwind CSS.

In any case, once you have the source code open in your favourite IDE, install the dependencies:

npm install

Then run the web app locally:

npm run dev

Setting up your API key

Next, let’s set up our OpenAI API key as an environment variable.

If you need to generate a new secret key, you can do so on the API keys page of the OpenAI API platform:

Create the file .env.local in your project directory, e.g., gpt-chatbot/.env.local, and populate it as follows, replacing the placeholder with your secret key:

OPENAI_API_KEY=<your-secret-key>
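Next.js automatically loads variables from .env.local into process.env for server-side code, which includes API routes. Here's a minimal sketch of the bearer-token header our backend will build from it; the key value below is a made-up placeholder purely for illustration:

```javascript
// Next.js loads `.env.local` into `process.env` on the server.
// We set a made-up placeholder key here only for illustration;
// in the real app, the value comes from your `.env.local` file.
process.env.OPENAI_API_KEY = "sk-placeholder";

// The backend authenticates with the OpenAI API via an Authorization header
const authHeader = "Bearer " + process.env.OPENAI_API_KEY;

console.log(authHeader); // → "Bearer sk-placeholder"
```

Note that because the key is only read on the server, it is never shipped to the browser.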

Building the backend

For the backend, we write an endpoint /api/chat, which is available locally at http://localhost:3000/api/chat.

Our endpoint works in the following way:

  • Our endpoint receives requests with the HTTP method POST and a request body with a property messages corresponding to an array of messages.
  • Internally, the endpoint will send a request containing messages to the OpenAI API’s chat completion endpoint, which uses the model underlying ChatGPT, the gpt-3.5-turbo model.
  • In the success case, the OpenAI API will return a generated reply to the messages we sent it, which we store in the variable reply.
  • The variable reply is then returned in the body of the response from our /api/chat endpoint.
  • We also include error handling logic to resolve cases where things go wrong.
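To make the data flow above concrete, here's a small sketch of the shapes involved. The messages array and the choices/usage fields follow the real create chat completion response format, but the specific contents are made up for illustration:

```javascript
// A `messages` array, as sent in the body of a POST to `/api/chat`
const messages = [
  { role: "system", content: "You are a chatbot that is helpful and replies concisely" },
  { role: "user", content: "Why is the sky blue?" },
];

// A simplified create chat completion response body from the OpenAI API
// (real responses include additional fields, e.g. `id` and `created`)
const createChatCompletionResBody = {
  choices: [
    { message: { role: "assistant", content: "Because of Rayleigh scattering." } },
  ],
  usage: { prompt_tokens: 25, completion_tokens: 7, total_tokens: 32 },
};

// Our endpoint extracts the generated reply and returns it to the frontend
const reply = createChatCompletionResBody.choices[0].message;
console.log(reply.content); // → "Because of Rayleigh scattering."
```

The `usage` field is only logged server-side; the frontend receives just the reply.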

Code and commands for the backend

To proceed, we rename the file located at pages/api/hello.js to chat.js, either manually or using the following command:

mv pages/api/hello.js pages/api/chat.js

Then, we replace the placeholder code within it with the following content:

// Logic for the ChatGPT-powered `/api/chat` endpoint
export default async function handler(req, res) {
  try {
    // Throw an error if the request does not have the POST method
    if (req.method !== "POST") {
      let error = new Error("Request does not have the POST method.");
      error.statusCode = 405;
      error.body = { error: { reason: "Method not allowed" } };
      throw error;
    }

    // Processing the request body
    const messages = req.body.messages;

    // Sending a request to the OpenAI create chat completion endpoint

    // Setting parameters for our request
    const createChatCompletionEndpointURL =
      "https://api.openai.com/v1/chat/completions";
    const createChatCompletionReqParams = {
      model: "gpt-3.5-turbo",
      messages: messages,
    };

    // Sending our request using the Fetch API
    const createChatCompletionRes = await fetch(
      createChatCompletionEndpointURL,
      {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer " + process.env.OPENAI_API_KEY,
        },
        body: JSON.stringify(createChatCompletionReqParams),
      }
    );

    // Processing the response body
    const createChatCompletionResBody =
      await createChatCompletionRes.json();

    // Error handling for the OpenAI endpoint
    if (createChatCompletionRes.status !== 200) {
      let error = new Error(
        "Create chat completion request was unsuccessful."
      );
      error.statusCode = createChatCompletionRes.status;
      error.body = createChatCompletionResBody;
      throw error;
    }

    // Properties on the response body
    const reply = createChatCompletionResBody.choices[0].message;
    const usage = createChatCompletionResBody.usage;

    // Logging the results
    console.log(`Create chat completion request was successful. Results:
Replied message: ${reply.content}

Token usage:
Prompt: ${usage.prompt_tokens}
Completion: ${usage.completion_tokens}
Total: ${usage.total_tokens}`);

    // Sending a successful response for our endpoint
    res.status(200).json({ reply });
  } catch (error) {
    // Error handling

    // Server-side error logging
    console.log(`Thrown error: ${error.message}
Status code: ${error.statusCode}
Error: ${JSON.stringify(error.body)}`);

    // Sending an unsuccessful response for our endpoint
    res.status(error.statusCode || 500).json({
      error: {
        reply: {
          role: "assistant",
          content: "An error has occurred.",
        },
      },
    });
  }
}
Building the frontend

For our frontend, we want a chat UI that lets the user write and submit messages, sends requests containing the user's messages to the backend, receives responses containing replies generated by the ChatGPT API, and updates the UI throughout.

The logic of the frontend works as follows:

  • We keep track of state variables messages, newMessageText and loadingStatus.
  • We define three event handlers:
    1. onChange: Updates newMessageText as the user types their new message.
    2. onSubmit: Triggered when the user submits a new message and sets off logic to update the UI with the user’s new message, set loadingStatus to true, and reset newMessageText to the empty string.
    3. onKeyDown: Enables response to be submitted using the return key.
  • We define two useEffect hooks:
    1. The first hook is triggered when loadingStatus changes. Within this hook, we define and use a function fetchReply which will call our /api/chat endpoint with messages as a parameter, return a reply generated by the OpenAI API, and appropriately update the state variables.
    2. The second hook is triggered when newMessageText changes. It is used to automatically adjust the height of the message input box as the user types. This hook makes use of three refs: textareaRef, backgroundRef and whitespaceRef.

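A quick aside on the state updates in the steps above: React state must be replaced rather than mutated, so each new message is appended by spreading the existing array into a fresh one. A minimal sketch of this pattern, outside of React:

```javascript
// The initial chat state: just the system message
const messages = [
  { role: "system", content: "You are a chatbot that is helpful and replies concisely" },
];

// Appending a new message by spreading the old array into a new one
const userMessage = { role: "user", content: "Hello!" };
const updatedMessages = [...messages, userMessage];

console.log(updatedMessages.length); // → 2
console.log(messages.length); // → 1 (the original array is untouched)
```

Passing a brand-new array to setMessages is what lets React detect the change and re-render the chat.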
Code and commands for the frontend

As for the implementation, we first need to install the react-markdown and remark-gfm packages. The OpenAI API uses markdown syntax for code blocks and tables, and these packages let us parse and render it. Run the following command:

npm install react-markdown remark-gfm

Then, replace the placeholder content within index.js with the following content:

import Head from "next/head";
import { useState, useEffect, useRef } from "react";

// Used to parse message contents as markdown
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";

export default function Home() {
  // State variables
  const [messages, setMessages] = useState([
    {
      role: "system",
      content: "You are a chatbot that is helpful and replies concisely",
    },
  ]); // An array of the messages that make up the chat
  const [newMessageText, setNewMessageText] = useState("");
  const [loadingStatus, setLoadingStatus] = useState(false);

  // `onChange` event handler to update `newMessageText` as the user types
  const onChange = (event) => {
    setNewMessageText(event.target.value);
  };

  // `onClick` event handler to create a new chat
  const onClick = () => {
    setMessages([
      {
        role: "system",
        content: "You are a chatbot that is helpful and replies concisely",
      },
    ]);
    setNewMessageText("");
  };

  // `onSubmit` event handler fired when the user submits a new message
  const onSubmit = async (event) => {
    event.preventDefault();
    setMessages([...messages, { role: "user", content: newMessageText }]);
    setLoadingStatus(true);
    setNewMessageText("");
  };

  // `onKeyDown` event handler to send a message when the return key is hit
  // The user can start a new line by pressing shift-enter
  const onKeyDown = (event) => {
    if (event.keyCode == 13 && event.shiftKey == false) {
      onSubmit(event);
    }
  };

  // Effect hook triggered when `loadingStatus` changes
  // Triggered on form submission
  useEffect(() => {
    // Function that calls the `/api/chat` endpoint and updates `messages`
    const fetchReply = async () => {
      try {
        // Try to fetch a `reply` from the endpoint and update `messages`
        const response = await fetch("/api/chat", {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ messages }),
        });

        const responseBody = await response.json();
        const reply =
          response.status === 200
            ? responseBody.reply
            : responseBody.error.reply;

        setMessages([...messages, reply]);
      } catch {
        // Catch and handle any unexpected errors
        const reply = {
          role: "assistant",
          content: "An error has occurred.",
        };

        setMessages([...messages, reply]);
      }

      // Set `loadingStatus` back to false
      setLoadingStatus(false);
    };

    // `fetchReply` executes only if a new message has been submitted
    // `setLoadingStatus(false)` triggers the hook again
    // No action occurs the second time because of the condition
    if (loadingStatus === true) {
      fetchReply();
    }
  }, [loadingStatus]);

  // Logic for auto-adjusting the textarea height as the user types
  // Ref variables
  const textareaRef = useRef(null);
  const backgroundRef = useRef(null);
  const whitespaceRef = useRef(null);

  // Effect hook triggered when `newMessageText` changes
  useEffect(() => {
    // Set the textarea height to 0 px for an instant
    // Triggers scroll height to be recalculated
    // Otherwise, the textarea won't shrink
    textareaRef.current.style.height = "0px";

    const MAX_HEIGHT = 320;
    const HEIGHT_BUFFER = 4;
    const VERTICAL_SPACING = 20;

    const textareaContentHeight =
      textareaRef.current.scrollHeight + HEIGHT_BUFFER;

    const textareaHeight = Math.min(textareaContentHeight, MAX_HEIGHT);

    textareaRef.current.style.height = textareaHeight + "px";
    backgroundRef.current.style.height =
      textareaHeight + 2 * VERTICAL_SPACING + "px";
    whitespaceRef.current.style.height =
      textareaHeight + 2 * VERTICAL_SPACING + "px";
  }, [newMessageText]);

  return (
    <>
      <Head>
        <title>GPT Chatbot</title>
        <meta
          name="description"
          content={
            "GPT Chatbot: A simple ChatGPT-powered chatbot" +
            " built with Next.js and Tailwind CSS"
          }
        />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <link rel="icon" href="/favicon.ico" />
      </Head>

      <main className="mx-auto h-screen max-w-full sm:max-w-3xl">
        <div className="py-8">
          <h1 className="text-center text-6xl font-bold text-blue-500">
            GPT Chatbot
          </h1>
        </div>

        {messages.length === 1 && (
          <div className="mx-10 mt-20 flex justify-center">
            <div>
              <p className="mb-2 font-bold">
                GPT Chatbot is a basic chatbot built with the OpenAI API,
                Next.js and Tailwind CSS
              </p>
              <p className="mb-32">
                To start a conversation, type a message below and hit send
              </p>
            </div>
          </div>
        )}

        <div>
          {messages.slice(1).map((message, index) => (
            <div className="my-4 mx-2" key={index.toString()}>
              <p className="font-bold">
                {message.role === "assistant" ? "GPT Chatbot" : "You"}
              </p>
              <ReactMarkdown remarkPlugins={[remarkGfm]}>
                {message.content}
              </ReactMarkdown>
            </div>
          ))}
        </div>

        {loadingStatus && (
          <div className="mx-2 mt-4">
            <p className="font-bold">GPT Chatbot is replying...</p>
          </div>
        )}

        {!loadingStatus && messages.length > 1 && (
          <div className="mt-4 flex justify-center">
            <button
              className="h-11 rounded-md border-2 border-gray-500
                         bg-gray-500 px-1 py-1 hover:border-gray-600 
                         hover:bg-gray-600"
              onClick={onClick}
            >
              <p className="font-bold text-white">New chat</p>
            </button>
          </div>
        )}

        {/* Whitespace below the chat and a background for the form */}
        <div ref={whitespaceRef} className="z-0"></div>
        <div
          ref={backgroundRef}
          className="fixed bottom-0 z-10 w-full max-w-full bg-white/75
                     sm:max-w-3xl"
        ></div>

        {/* Form for submitting new messages */}
        <div
          className="fixed bottom-5 z-20 w-full max-w-full 
                     sm:max-w-3xl"
        >
          <form className="mx-2 flex items-end" onSubmit={onSubmit}>
            <textarea
              ref={textareaRef}
              className="mr-2 grow resize-none rounded-md border-2 
                       border-gray-400 p-2 focus:border-blue-600 
                       focus:outline-none"
              value={newMessageText}
              onChange={onChange}
              onKeyDown={onKeyDown}
              placeholder="Why is the sky blue?"
            />

            {loadingStatus ? (
              <button
                className="h-11 rounded-md border-2 border-blue-400
                         bg-blue-400 px-1 py-1"
                disabled
              >
                <p className="font-bold text-white">Send</p>
              </button>
            ) : (
              <button
                type="submit"
                className="h-11 rounded-md border-2 border-blue-600
                         bg-blue-600 px-1 py-1 hover:border-blue-700 
                         hover:bg-blue-700"
              >
                <p className="font-bold text-white">Send</p>
              </button>
            )}
          </form>
        </div>
      </main>
    </>
  );
}
Summary and next steps

That’s all there is to it. At this point you should have a basic chatbot built with the OpenAI ChatGPT API, Next.js and Tailwind CSS running locally.

As for next steps, my hope is that this web app can serve as a springboard for bringing your own ideas to life.

👋 Thanks for reading


The goal of this blog is to create useful content for devs and share my own learnings.

My current passion project is to help devs learn how to use the OpenAI API.

Join my email list to hear about new content. No spam, unsubscribe at any time.