Building a Real-Time AI Chat Interface with JavaScript and OpenAI

Ramkumar Khubchandani
5 min read · Dec 31, 2024

AI chat interfaces are everywhere these days, but have you ever wondered how to build one yourself? In this guide, we’ll create a modern AI chat interface that streams responses in real time using JavaScript and the OpenAI API. We’ll build something similar to a typical chatbot interface, complete with streaming responses and a clean UI.

What We’ll Build

Our chat application will feature:

  • Real-time streaming responses using Server-Sent Events (SSE)
  • Modern, responsive UI with typing indicators
  • Message history management
  • Code syntax highlighting
  • Markdown support

Prerequisites

Before starting, you’ll need:

  • Node.js installed on your computer
  • An OpenAI API key (get one at platform.openai.com)
  • Basic knowledge of JavaScript and HTML/CSS
  • A code editor

Step 1: Setting Up the Project

First, let’s create our project and install dependencies:

mkdir ai-chat-interface
cd ai-chat-interface
npm init -y
npm install express openai marked highlight.js dotenv

Create a .env file in your project root:

OPENAI_API_KEY=your_api_key_here
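
If the project is under git, it’s also worth keeping the key out of version control by adding the .env file (and node_modules) to a .gitignore:

.env
node_modules/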

Step 2: Building the Backend

Create server.js for our Express backend:

require('dotenv').config();
const express = require('express');
const OpenAI = require('openai');

const app = express();
const port = 3000;

// Initialize OpenAI client (the v4 SDK expects an options object)
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.use(express.json());
app.use(express.static('public'));

// Route for chat completions with streaming
app.post('/chat', async (req, res) => {
  const { messages } = req.body;

  // Set headers for SSE
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  try {
    const stream = await client.chat.completions.create({
      model: 'gpt-3.5-turbo',
      messages: messages,
      stream: true,
    });

    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content || '';
      if (content) {
        // Send chunk to client
        res.write(`data: ${JSON.stringify({ content })}\n\n`);
      }
    }

    res.write('data: [DONE]\n\n');
  } catch (error) {
    console.error('Error:', error);
    res.write(`data: ${JSON.stringify({ error: 'An error occurred' })}\n\n`);
  } finally {
    res.end();
  }
});

app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}`);
});
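
Before building the frontend, you can sanity-check the streaming endpoint with curl (assuming the server is running and your API key is set):

curl -N http://localhost:3000/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello"}]}'

The -N flag disables curl’s output buffering so the chunks appear as they arrive.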

Step 3: Creating the Frontend

Create a public folder and add index.html:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>AI Chat Interface</title>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.8.0/styles/github-dark.min.css">
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <div class="chat-container">
    <div class="chat-messages" id="chatMessages"></div>
    <div class="chat-input-container">
      <textarea
        id="userInput"
        placeholder="Type your message here..."
        rows="1"
        class="chat-input"
      ></textarea>
      <button id="sendButton" class="send-button">Send</button>
    </div>
  </div>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/marked/9.1.2/marked.min.js"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.8.0/highlight.min.js"></script>
  <script src="app.js"></script>
</body>
</html>

Create public/styles.css:

body {
  margin: 0;
  padding: 0;
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
  background: #f0f2f5;
}

.chat-container {
  max-width: 800px;
  margin: 20px auto;
  background: white;
  border-radius: 10px;
  box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1);
  height: 90vh;
  display: flex;
  flex-direction: column;
}

.chat-messages {
  flex-grow: 1;
  overflow-y: auto;
  padding: 20px;
}

.message {
  margin-bottom: 20px;
  padding: 10px 15px;
  border-radius: 8px;
  max-width: 80%;
}

.user-message {
  background: #007AFF;
  color: white;
  margin-left: auto;
}

.ai-message {
  background: #f0f0f0;
  color: #333;
}

.chat-input-container {
  padding: 20px;
  border-top: 1px solid #eee;
  display: flex;
  gap: 10px;
}

.chat-input {
  flex-grow: 1;
  padding: 12px;
  border: 1px solid #ddd;
  border-radius: 8px;
  resize: none;
  font-family: inherit;
  font-size: 14px;
}

.send-button {
  padding: 12px 24px;
  background: #007AFF;
  color: white;
  border: none;
  border-radius: 8px;
  cursor: pointer;
  font-size: 14px;
}

.send-button:hover {
  background: #0056b3;
}

.typing-indicator {
  padding: 20px;
  color: #666;
  font-style: italic;
}

pre code {
  border-radius: 6px;
  padding: 15px !important;
}

Create public/app.js:

class ChatInterface {
  constructor() {
    this.messages = [];
    this.messageContainer = document.getElementById('chatMessages');
    this.userInput = document.getElementById('userInput');
    this.sendButton = document.getElementById('sendButton');

    this.setupEventListeners();
    this.setupMarkdown();
  }

  setupEventListeners() {
    this.sendButton.addEventListener('click', () => this.sendMessage());
    this.userInput.addEventListener('keypress', (e) => {
      if (e.key === 'Enter' && !e.shiftKey) {
        e.preventDefault();
        this.sendMessage();
      }
    });

    // Auto-resize the textarea as the user types
    this.userInput.addEventListener('input', () => {
      this.userInput.style.height = 'auto';
      this.userInput.style.height = this.userInput.scrollHeight + 'px';
    });
  }

  setupMarkdown() {
    // Recent versions of marked no longer accept a `highlight` option here,
    // so we only enable line breaks; syntax highlighting is applied after
    // parsing, in addMessage().
    marked.setOptions({
      breaks: true
    });
  }

  addMessage(content, isUser = false) {
    const messageDiv = document.createElement('div');
    messageDiv.className = `message ${isUser ? 'user-message' : 'ai-message'}`;

    if (isUser) {
      messageDiv.textContent = content;
    } else {
      // Render the AI response as markdown. In production you may want to
      // sanitize this HTML (e.g. with DOMPurify) before inserting it.
      messageDiv.innerHTML = marked.parse(content);

      // highlight.js v11 replaced highlightBlock() with highlightElement()
      messageDiv.querySelectorAll('pre code').forEach((block) => {
        hljs.highlightElement(block);
      });
    }

    this.messageContainer.appendChild(messageDiv);
    this.scrollToBottom();
  }

  addTypingIndicator() {
    const indicator = document.createElement('div');
    indicator.id = 'typingIndicator';
    indicator.className = 'typing-indicator';
    indicator.textContent = 'AI is typing...';
    this.messageContainer.appendChild(indicator);
    this.scrollToBottom();
  }

  removeTypingIndicator() {
    const indicator = document.getElementById('typingIndicator');
    if (indicator) {
      indicator.remove();
    }
  }

  scrollToBottom() {
    this.messageContainer.scrollTop = this.messageContainer.scrollHeight;
  }

  async sendMessage() {
    const content = this.userInput.value.trim();
    if (!content) return;

    this.addMessage(content, true);
    this.messages.push({ role: 'user', content });

    this.userInput.value = '';
    this.userInput.style.height = 'auto';

    this.addTypingIndicator();

    try {
      // EventSource only supports GET requests, so we POST the message
      // history with fetch and read the SSE-formatted stream ourselves.
      const response = await fetch('/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ messages: this.messages }),
      });

      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      let aiResponse = '';
      let buffer = '';

      while (true) {
        const { value, done } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const events = buffer.split('\n\n');
        buffer = events.pop(); // keep any partial event for the next chunk

        for (const event of events) {
          if (!event.startsWith('data: ')) continue;
          const payload = event.slice(6);

          if (payload === '[DONE]') {
            this.removeTypingIndicator();
            this.messages.push({ role: 'assistant', content: aiResponse });
            return;
          }

          const data = JSON.parse(payload);
          if (data.error) throw new Error(data.error);
          aiResponse += data.content;

          // Re-render the in-progress AI message with the new chunk appended
          const lastMessage = this.messageContainer.lastElementChild;
          if (lastMessage && lastMessage.classList.contains('ai-message')) {
            lastMessage.remove();
          }
          this.addMessage(aiResponse);
        }
      }

      // Stream ended without an explicit [DONE] marker
      this.removeTypingIndicator();
      if (aiResponse) {
        this.messages.push({ role: 'assistant', content: aiResponse });
      }
    } catch (error) {
      console.error('Error:', error);
      this.removeTypingIndicator();
      this.addMessage('Sorry, an error occurred. Please try again.');
    }
  }
}


const chat = new ChatInterface();

Step 4: Running the Application

  1. Start the server:

node server.js

  2. Open your browser to http://localhost:3000
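
As a small convenience (not part of the original steps), you can also add a start script to package.json and launch the app with npm start:

"scripts": {
  "start": "node server.js"
}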

Key Features

1. Real-Time Streaming

We use Server-Sent Events (SSE) instead of WebSockets because:

  • It’s simpler to implement for one-way streaming
  • Built-in reconnection handling
  • Native browser support
  • Works well with OpenAI’s streaming API (see the sample stream below)
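
For reference, the raw stream the browser receives from our /chat endpoint looks like this (the content values here are just illustrative):

data: {"content":"Hello"}

data: {"content":" there"}

data: [DONE]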

2. Message Rendering

  • Markdown support using marked.js (see the sanitization note after this list)
  • Code syntax highlighting with highlight.js
  • Real-time updates as chunks arrive
  • Auto-scrolling to latest messages
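
One caveat: addMessage injects the parsed markdown with innerHTML. If you don’t fully trust the model output, sanitize the HTML first. A minimal sketch using DOMPurify (an extra dependency, not part of the original setup):

// Assumes DOMPurify has been loaded, e.g. via a <script> tag or a bundler.
// Sanitize the rendered markdown before inserting it into the DOM.
messageDiv.innerHTML = DOMPurify.sanitize(marked.parse(content));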

3. UI/UX Features

  • Responsive textarea that expands with content
  • Clean, modern design with proper spacing
  • Visual distinction between user and AI messages
  • Typing indicator during AI response
  • Support for code blocks and formatting

Potential Enhancements

  1. Message Persistence

// Add these methods to the ChatInterface class
saveMessages() {
  localStorage.setItem('chatHistory', JSON.stringify(this.messages));
}

loadMessages() {
  const saved = localStorage.getItem('chatHistory');
  if (saved) {
    this.messages = JSON.parse(saved);
    this.messages.forEach(msg =>
      this.addMessage(msg.content, msg.role === 'user')
    );
  }
}
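
One way to wire these in (a sketch; adjust to your needs) is to load the history when the class is constructed and save it whenever it changes:

// In the constructor, after setupMarkdown():
this.loadMessages();

// In sendMessage(), after each this.messages.push(...):
this.saveMessages();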

2. Error Handling

// Add to server.js, after the routes (Express error-handling
// middleware must be registered last to catch errors from them)
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({ error: 'Something went wrong!' });
});

You now have a working AI chat interface with real-time streaming responses. This implementation is a solid foundation you can build on for your specific needs. Some ideas for expansion:

  • Add user authentication
  • Implement different AI models or personas (see the sketch after this list)
  • Add conversation branching
  • Save chat histories to a database
  • Add voice input/output support
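
For example, personas can be as simple as prepending a system message to the history before sending it to the server. A minimal sketch (the persona text is just an illustration):

// In sendMessage(), when building the request body:
const persona = { role: 'system', content: 'You are a concise, friendly coding assistant.' };

const response = await fetch('/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: [persona, ...this.messages] }),
});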

Remember to handle your OpenAI API key securely: keep it in the server-side .env file and never expose it in client-side code.

Do you want to learn more content like this?

Follow me or message me on LinkedIn.


Written by Ramkumar Khubchandani

Frontend Developer|Technical Content Writer|React|Angular|React-Native|Corporate Trainer|JavaScript|Trainer|Teacher| Mobile: 7709330265|ramkumarkhub@gmail.com
