Building a Web Component-Based Chatbot with GPT-3.5-turbo

What you will learn
- What are the prerequisites for building a custom chatbot interface with web components? Familiarity with JavaScript and the Shadow DOM, plus basic knowledge of OpenAI's API.
- How does the ChatBot class manage chat operations? It encapsulates the chatbot's state, including chat visibility, messages, and responses from the OpenAI API, behind methods like sendMessage() and render().
- What purpose do token limitations serve? Constants like TOKEN_LIMIT and SPECIAL_TOKEN_BUFFER cap the number of tokens sent per request, keeping API calls efficient and avoiding errors from exceeding the model's limit.
- How do component lifecycle callbacks improve the chatbot's functionality? connectedCallback and disconnectedCallback handle UI rendering and cleanup when the chatbot is added to or removed from the DOM, ensuring efficient resource usage.
- What is the role of event listeners? They handle user interactions such as toggling the chat window, closing it, sending messages, and key events, keeping the chatbot responsive.
Creating a custom chatbot interface using web components allows for modular and reusable design. In this tutorial, I’ll guide you through building a chatbot leveraging OpenAI’s powerful GPT-3.5-turbo model. All of the code for this blog post is available here.
Prerequisites
- Familiarity with JavaScript and the Shadow DOM
- Basic knowledge of OpenAI’s API
Overview
We’ll start by defining SVG icons, setting up token limitations, and building the main ChatBot
class. The ChatBot class will have helper methods and event listeners to manage chat operations.
1. Defining SVG Icons
Before diving into the main component, we first define SVG elements for our chat and close icons. These icons enhance user interaction:
const chatIcon = `...`
const closeIcon = `...`
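The actual icon markup is elided above. As a sketch of what those strings can hold, each is simply a template literal of inline SVG; the path data below is a placeholder I made up, not the icons from the original code:

```javascript
// Hypothetical inline SVG markup; swap in whatever icon paths you prefer.
const chatIcon = `
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24">
    <path d="M4 4h16v12H7l-3 3z" fill="currentColor" />
  </svg>`

const closeIcon = `
  <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24">
    <path d="M6 6l12 12M18 6L6 18" stroke="currentColor" stroke-width="2" />
  </svg>`
```

Using `currentColor` lets the icons inherit whatever text color the surrounding component styles define.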
2. Setting Up Token Limitations
Next, for effective management of tokens and API calls, we’ll define some constants:
const TOKEN_LIMIT = 1000
const SPECIAL_TOKEN_BUFFER = 10
- TOKEN_LIMIT: the maximum number of tokens we want to handle.
- SPECIAL_TOKEN_BUFFER: a small buffer to account for any special tokens.
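One way these two constants can work together (a sketch of the budgeting idea, not necessarily the original code) is to subtract the buffer from the limit and trim older messages until the conversation fits:

```javascript
const TOKEN_LIMIT = 1000
const SPECIAL_TOKEN_BUFFER = 10

// Effective budget for the conversation itself, leaving room for special tokens.
const EFFECTIVE_LIMIT = TOKEN_LIMIT - SPECIAL_TOKEN_BUFFER

// Keep only the most recent messages that fit inside the budget.
// `estimate` is any per-message token estimator (see estimateTokens below).
function trimToBudget(messages, estimate) {
  const kept = []
  let used = 0
  // Walk the history newest-first so recent context survives trimming.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimate(messages[i].content)
    if (used + cost > EFFECTIVE_LIMIT) break
    used += cost
    kept.unshift(messages[i])
  }
  return kept
}
```

Dropping the oldest messages first is a simple policy; summarizing them instead is a common refinement.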
3. Building the ChatBot Class
Our main component is the ChatBot class. It manages the chat's state, UI, and interactions:
class ChatBot extends HTMLElement {
...
}
3.1. Initializing Component State
Inside the constructor, we initialize our chatbot's state. This includes the chat window's visibility (isOpen), the chat messages (messages), the chatbot's thinking status, and more.
constructor() { ... }
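A minimal constructor might look like the sketch below. The isThinking field name is my assumption for the "thinking status", and the fallback base class exists only so the snippet also runs outside a browser:

```javascript
// Fall back to a plain class so this sketch can run outside the browser too.
const Base = globalThis.HTMLElement ?? class {}

class ChatBot extends Base {
  constructor() {
    super()
    // attachShadow is browser-only; guarded here for illustration.
    if (this.attachShadow) this.attachShadow({ mode: "open" })
    this.isOpen = false      // chat window starts closed
    this.messages = []       // running transcript sent to the API
    this.isThinking = false  // true while waiting on the model's reply
  }
}
```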
3.2. Styling the Component
The get styles() method returns a template literal containing the CSS to style our chatbot:
get styles() { ... }
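As an illustration of the getter pattern, it can simply return a CSS string that the component later injects into its shadow root; the selectors below are invented for this sketch:

```javascript
class ChatBotStyles {
  // In the real component this getter lives on ChatBot itself.
  get styles() {
    return `
      .chat-toggle { position: fixed; bottom: 1rem; right: 1rem; }
      .chat-window { width: 320px; height: 420px; display: flex; flex-direction: column; }
      .chat-window.closed { display: none; }
    `
  }
}
```

Keeping the CSS inside the class means the shadow root gets fully self-contained styling with no external stylesheet to load.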
3.3. Estimating Tokens
We use a simple method to estimate the number of tokens a message might consume:
estimateTokens(message) { ... }
By using a rough estimate instead of an exact token count, the ChatBot stays easy to integrate with a wide variety of LLMs: each model uses its own tokenizer, but token counts grow roughly in proportion to the length of a message, so a simple length-based estimate is close enough for budgeting requests.
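A common heuristic (an assumption here, not necessarily the post's exact formula) is roughly 4 characters per token for English text:

```javascript
// Rough heuristic: ~4 characters per token for English text.
// Good enough for budgeting; real tokenizers differ per model.
function estimateTokens(message) {
  return Math.ceil(message.length / 4)
}
```

Overestimating slightly is safer than underestimating, since the goal is to stay under the model's limit rather than to bill precisely.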
3.4. Component Lifecycle Callbacks
connectedCallback and disconnectedCallback handle rendering and cleanup operations when the component gets attached to or detached from the DOM:
connectedCallback() { ... }
disconnectedCallback() { ... }
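One tidy pattern for this pair (a sketch; the AbortController approach is my assumption, not necessarily the original code) is to register every listener against a single signal so disconnectedCallback can detach them all in one call:

```javascript
// Fallback base so the sketch also runs outside the browser;
// EventTarget provides add/removeEventListener in Node too.
const Base = globalThis.HTMLElement ?? class extends EventTarget {}

class ChatBot extends Base {
  connectedCallback() {
    this.isOpen = false
    // Every listener shares one AbortSignal...
    this.controller = new AbortController()
    this.addEventListener("click", () => this.toggleChat(), {
      signal: this.controller.signal,
    })
  }

  toggleChat() {
    this.isOpen = !this.isOpen
  }

  disconnectedCallback() {
    // ...so a single abort() removes them all when the element leaves the DOM.
    this.controller.abort()
  }
}
```

This avoids the classic leak where listeners added in connectedCallback survive removal because nobody kept a reference to unhook them.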
3.5. Chat Operations
Several methods manage chat operations, from toggling the chat window to sending messages and rendering the UI:
- toggleChat(): opens or closes the chat window.
- sendMessage(message): sends a message and handles the API response.
- render(): updates the component's UI.
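The network half of sendMessage can be sketched as a POST to the /api/chat route shown later in this post. The injectable fetchImpl parameter is not in the original; it exists only so the snippet can be exercised without a running server:

```javascript
// POST the running transcript to the backend and return the model's reply.
async function sendMessage(messages, fetchImpl = globalThis.fetch) {
  const response = await fetchImpl("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ messages }),
  })
  const data = await response.json()
  return data.content // matches the { content } shape the server responds with
}
```

Sending the whole messages array, rather than just the latest message, is what gives the model conversational memory.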
3.6. Event Handlers
Event listeners and corresponding handlers ensure the chatbot responds to user interactions:
- handleToggleChatClick(): toggles the chat window.
- handleCloseButtonClick(): closes the chat.
- handleSendButtonClick(): handles the send button click.
- handleInputKeydown(): sends a message when the Enter key is pressed.
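The keydown handler's decision logic can be factored out as below (a sketch; the Shift+Enter-for-newline behavior is my assumption, and the injected send callback stands in for the component's real send path):

```javascript
// Return true when the event should trigger a send:
// plain Enter sends, Shift+Enter is left alone for multi-line input.
function shouldSendOnKeydown(event) {
  return event.key === "Enter" && !event.shiftKey
}

function handleInputKeydown(event, send) {
  if (shouldSendOnKeydown(event)) {
    send()
  }
}
```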
4. Registering the Web Component
Finally, we define and register our web component:
if (!customElements.get("chat-bot")) {
customElements.define("chat-bot", ChatBot)
}
Example Integration
With our ChatBot defined, here's a simple HTML file demonstrating how to use it, assuming ChatBot.js is located next to this HTML file inside the public folder:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Chat Bot Example</title>
</head>
<body>
<chat-bot></chat-bot>
<script type="module">
import "./ChatBot.js"
</script>
</body>
</html>
And a simple Express server to illustrate what a backend could look like:
import express, { Request, Response } from "express"
import OpenAI from "openai"
import dotenv from "dotenv"

dotenv.config() // Load environment variables from .env file

const OPENAI_API_KEY = process.env.OPENAI_API_KEY
if (!OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set; add it to your .env file.")
}

const openai = new OpenAI({
  apiKey: OPENAI_API_KEY,
})

const app = express()
app.use(express.json())
app.use(express.static("public"))

app.post("/api/chat", async (req: Request, res: Response) => {
  try {
    const gptResponse = await openai.chat.completions.create({
      // OpenAI offers more models, and choosing a different model is as easy as
      // changing this string specifier
      model: "gpt-3.5-turbo",
      messages: req.body.messages,
      // example number of tokens; tune this number for your needs
      max_tokens: 150,
    })
    if (gptResponse && gptResponse.choices && gptResponse.choices.length > 0) {
      res.status(200).json({ content: gptResponse.choices[0].message.content })
    } else {
      res
        .status(500)
        .json({ content: "Failed to get a response from the model." })
    }
  } catch (error) {
    console.error("Error calling OpenAI:", error)
    res.status(500).json({ content: "Internal server error." })
  }
})

const PORT = process.env.PORT || 3000
app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`)
})
Conclusion
With this structure, we've built a modular chatbot component using web components. This chatbot is easily integrated into any web application and offers a customizable interface for interacting with the GPT-3.5-turbo model, or any LLM you have available through an API. The full code for this blog post is available in my GitHub repository.