![Woman in focus working on software development remotely on laptop indoors.](https://images.pexels.com/photos/1181288/pexels-photo-1181288.jpeg?auto=compress&cs=tinysrgb&w=1920)
The OpenAI Next.js Template is a game-changer for building production-ready chat applications. This template provides a solid foundation for developers to create scalable and efficient chatbots using Next.js and OpenAI's cutting-edge technology.
With the OpenAI Next.js Template, developers can leverage the power of OpenAI's API to integrate advanced AI capabilities into their chat applications. This includes features like natural language processing, text generation, and more.
The template includes a pre-built example of a chat application that demonstrates how to use OpenAI's API to power a conversational interface. This example showcases the template's capabilities and provides a starting point for developers to build their own custom chat applications.
By using the OpenAI Next.js Template, developers can save time and effort by leveraging a proven and tested solution that's already optimized for production use.
Project Setup
To set up a new Next.js project, run `npx create-next-app@latest`. This command will guide you through the project setup process.
You'll be asked a few questions regarding the project setup, so be prepared to answer them. The process is relatively straightforward.
Specify a name for the new project, which will also be the name of the folder containing the initial project setup. For this example, we'll choose the name `next-openai-app`.
The setup then installs all default project dependencies, and you should see a success message at the end.
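Assuming the standard create-next-app flow, the setup steps above look roughly like this (the project name matches the example; installing the openai package is an extra step this kind of project relies on later):

```shell
# Scaffold the project; the CLI asks its configuration questions interactively.
npx create-next-app@latest next-openai-app

# Enter the project folder and add the OpenAI client library.
cd next-openai-app
npm install openai
```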
Understanding the Starter
To get started with the OpenAI Next.js template, you'll need to clone the repo. This will give you a solid foundation to work from.
The Readme in the repo is your next stop, where you'll find instructions on how to install dependencies and add your API key as an environment variable. This is an important step, so don't skip it.
The frontend of the template is straightforward, with a form that makes a fetch request to the API when submitted. This is handled by Next.js's API routes, which are triggered by routes that begin with /api.
In this case, the request to /api/generate is handled by the file /pages/api/generate.js. This is where the magic happens, as it makes a request to OpenAI's completion API with the prompt seen in the template.
You can change the prompt to suit your needs. For example, you might change it to: “Suggest three names for my new ${category} startup. It should convey ${properties}. It should be a compound word.”
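As a sketch, the interpolated prompt above can be produced by a small helper (the function name `buildPrompt` is hypothetical; `category` and `properties` would come from the submitted form):

```javascript
// Hypothetical helper that builds the prompt template shown above.
function buildPrompt(category, properties) {
  return (
    `Suggest three names for my new ${category} startup. ` +
    `It should convey ${properties}. It should be a compound word.`
  );
}
```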
Implementing the API
To implement the API, we need to create a new API endpoint in our Next.js project. This endpoint will handle the communication with the OpenAI API.
To create the API endpoint, add a new file called pages/api/openai.js. In this file, we import the OpenAIApi and Configuration modules from the openai package.
We then create a Configuration object and use it to create an instance of the OpenAIApi. The exported async function handles an incoming HTTP request and response.
If the request body contains a property named "prompt", the function calls the "createCompletion" method on the OpenAI API instance, passing an object that contains the model ID "text-davinci-003" and the prompt text from the request body.
The function then sends a JSON response with a status code of 200 and a text property containing the first choice of the completion returned by the OpenAI API.
If the request body does not contain a prompt, the function sends a JSON response with a status code of 400 and a text property with the message "No prompt provided".
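Putting the steps above together, the handler might look like the following sketch. It assumes the legacy openai v3 package described in this article (with `Configuration`/`OpenAIApi` and `createCompletion`); in the real file, the handler would be the default export of pages/api/openai.js, with the client created at module scope as shown in the comments:

```javascript
// Sketch of pages/api/openai.js (assumes the legacy openai v3 package).
// In the real file:
//   import { Configuration, OpenAIApi } from "openai";
//   const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
//   const openai = new OpenAIApi(configuration);
//   export default handler;
async function handler(req, res) {
  const { prompt } = req.body || {};
  if (!prompt) {
    // No prompt in the request body: reject with a 400.
    res.status(400).json({ text: "No prompt provided" });
    return;
  }
  // Ask the completion API for a response to the prompt.
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt,
  });
  // Return the first choice's text with a 200 status.
  res.status(200).json({ text: completion.data.choices[0].text });
}
```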
To test the API endpoint, we start the web server in development mode by entering "npm run dev" in the terminal.
Front-End Implementation
The front-end implementation of the OpenAI Next.js template is where the magic happens.
You can start by setting up the template by running `npx create-next-app my-app --template openai`. This will give you a basic structure to work with.
The template includes a `.env` file with a placeholder for the OpenAI API key. Make sure to replace the placeholder with your actual API key.
The template also includes a basic layout component, which you can find in the `components/Layout.js` file. This component sets up the basic HTML structure of the page, including the header and footer.
To get started with rendering AI-generated text, you can use the `useOpenAI` hook, which is imported from the `@openai/nextjs` package. This hook provides a simple way to interact with the OpenAI API.
The `useOpenAI` hook returns an object with a `generate` method, which you can use to generate text based on a prompt. You can find an example of how to use this hook in the `pages/index.js` file.
Testing and Chat Service
Testing the Application
To test our chat service, open a browser and navigate to http://localhost:3000/. This is the URL where our application is hosted.
You should see a simple chat interface, with an input field and a button labeled "Get Response".
Start entering your question or query in the input field. The application is designed to accept user input and send it to the chat service for processing.
Click on the "Get Response" button to send the input to the chat service. This will initiate the process of retrieving a response from the service.
Once the response is received, the text provided by the chat service will be displayed on the screen. This is where you'll see the outcome of your query.
Chat Service
The Chat Service is a crucial component of our application, responsible for handling operations related to chats.
The Chat Service has three main functions:

- fetchChats: fetches the list of chats for a given room from the Supabase database, ordered by creation date.
- postChat: posts a new chat message to the database.
- getAnswer: gets an answer from the chatbot based on the user's chat message.
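A minimal sketch of fetchChats, assuming a @supabase/supabase-js client created elsewhere with `createClient(url, key)`, and a chats table with `room_id` and `created_at` columns (the table and column names are assumptions):

```javascript
// Hypothetical fetchChats: list all chats for a room, oldest first.
// `supabase` is a @supabase/supabase-js client, passed in for clarity.
async function fetchChats(supabase, roomId) {
  const { data, error } = await supabase
    .from("chats")
    .select("*")
    .eq("room_id", roomId)
    .order("created_at", { ascending: true });
  if (error) throw error;
  return data;
}
```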
The getAnswer function uses the Langchain library to retrieve the most relevant documents from the vector store, then generates a response using OpenAI's language model.
The Chat Service is built with Next.js, a popular framework for building server-side rendered and statically generated React applications.
Data Storage and Search
To set up our search functionality, we need to store our data in a way that allows for efficient querying. This is where Supabase comes in, allowing us to vectorize our data using OpenAI Embeddings.
We can convert our existing data into vectors using OpenAI Embeddings, which is a one-time operation. To do this, we'll add a button to the page that sends our celebrities array to the /api/load route, where it will be vectorized and stored in Supabase.
The vectorized data is then stored in a Supabase database, which we can access using the fetchCelebrities function. This function fetches our celebrities data from Supabase every time the application loads.
Here's a summary of the data storage process:
- Vectorize existing data using OpenAI Embeddings
- Store vectorized data in Supabase database
- Fetch data from Supabase using fetchCelebrities function
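The loading step can be sketched as below. It assumes the legacy openai v3 client (with `createEmbedding`), a celebrities table with an `embedding` column, and the text-embedding-ada-002 model; the model, table, and column names are assumptions for illustration:

```javascript
// Hypothetical body of the /api/load route: embed each celebrity once and
// store the vector in Supabase alongside the record.
async function loadCelebrities(openai, supabase, celebrities) {
  for (const celebrity of celebrities) {
    // One embedding request per record; this is a one-time operation.
    const response = await openai.createEmbedding({
      model: "text-embedding-ada-002",
      input: celebrity.name,
    });
    const embedding = response.data.data[0].embedding;
    await supabase.from("celebrities").insert({ name: celebrity.name, embedding });
  }
}
```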
With our data stored in Supabase, we can now implement vector search on the frontend. This involves updating our search input to use the vector search functionality instead of regular array filtration.
Vector Search on Frontend
Vector search on the frontend is a crucial step in making your application more user-friendly and efficient. To implement it, you need to fetch the vectorized data from Supabase, which we've already set up.
The first step is to define a function called `fetchCelebrities` that fetches the vectorized data from Supabase. This function will be used to retrieve the data every time the application loads.
To implement vector search, you need to update the search input to send the input text to the `api/search` route. This route will perform the vector search functionality and return the result to update the UI.
Here's an overview of the operations performed on the search query:
- Generate vector embeddings for the search query using OpenAI
- Make an RPC call to Supabase with the embeddings, specifying the SQL search function to run, the similarity threshold, and the match count
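These two operations can be sketched as follows, again assuming the legacy openai v3 client. The RPC name `vector_search` and its parameter names match the stored procedure in this section; the threshold and match count values are arbitrary examples:

```javascript
// Hypothetical /api/search logic: embed the query, then run the similarity
// search in Postgres via a Supabase RPC call.
async function searchCelebrities(openai, supabase, query) {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002",
    input: query,
  });
  const embeddings = response.data.data[0].embedding;
  const { data, error } = await supabase.rpc("vector_search", {
    p_embeddings: embeddings,
    p_threshold: 0.8,   // similarity threshold (example value)
    p_match_count: 10,  // maximum number of matches (example value)
  });
  if (error) throw error;
  return data;
}
```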
The SQL search function is a stored procedure in Supabase that performs a vector similarity search in the celebrities table. It returns an array of matched celebrities along with their details and similarity scores.
Here's a simplified version of the SQL stored procedure. It assumes the pgvector extension, where `<=>` is the cosine-distance operator and similarity is computed as `1 - distance`:

```sql
CREATE OR REPLACE FUNCTION vector_search(
  p_embeddings VECTOR(1536),
  p_threshold REAL,
  p_match_count INTEGER
)
RETURNS TABLE (
  id INTEGER,
  name TEXT,
  similarity REAL
) AS $$
BEGIN
  RETURN QUERY
  SELECT
    c.id,
    c.name,
    (1 - (c.embedding <=> p_embeddings))::REAL AS similarity
  FROM celebrities c
  WHERE 1 - (c.embedding <=> p_embeddings) > p_threshold
  ORDER BY c.embedding <=> p_embeddings
  LIMIT p_match_count;
END;
$$ LANGUAGE plpgsql;
```

This stored procedure takes three parameters: `p_embeddings`, `p_threshold`, and `p_match_count`. It computes the similarity between each stored embedding and the query embedding, discards rows below the threshold, and returns the closest matches along with their similarity scores.
Setting Up Regular Search
Setting up regular search is a crucial part of any data storage system, and it's surprisingly straightforward. To set up search functionality, you need to identify the field in your data objects that you want to search against, such as a name field.
In the example, the handleSubmit event handler is updated so the search input shows relevant results by matching the query against a particular field in each celebrity object, specifically the name field.
You can map over the searchResults variable in your template to display the matched celebrities, showing the full list by default and narrowing it once a search runs. This is exactly what was done in the example.
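A minimal sketch of this kind of regular search (the function name is hypothetical; it performs a case-insensitive substring match on the name field described above):

```javascript
// Hypothetical regular search: filter the celebrities array by name.
function searchByName(celebrities, query) {
  const q = query.toLowerCase();
  return celebrities.filter((celebrity) =>
    celebrity.name.toLowerCase().includes(q)
  );
}
```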
Featured Images: pexels.com