Building a Real-Time Chat App with WebSockets in 5 hours

Technologies used for building the Real-Time Chat App

A few days ago I decided to challenge myself by coding a simple chat application using WebSockets.

Part of the motivation was to better understand the challenges behind building a real-time communication application. At the same time, I wanted to push myself further on the infrastructure side of things by deploying everything on AWS and experimenting with a few DevOps best practices I’ve been wanting to explore more deeply.

For this project, I heavily relied on GitHub Copilot combined with senior-level engineering judgment to guide the AI, validate decisions and refine the output along the way.

The result? A fully functional real-time chat application built in the span of an afternoon, containerized with Docker, deployed on AWS, and integrated into a fully automated delivery pipeline.

The Tech Stack

For this project, I wanted a stack that was lightweight enough to build with in an afternoon, yet robust enough to be considered "production-ready" - in other words, one that balances development speed with reliability.

For state management I used Pinia, which allowed me to keep the chat state separate from the UI components, making the WebSocket integration much cleaner. On the infrastructure side, Nginx was crucial for handling the WebSocket handshake smoothly while serving the Vue SPA from the same entry point.

To go into more detail, these are the technologies I used:

  • Frontend: Vue.js 3 (Composition API) for the reactive UI, SCSS for custom styling, and Pinia for centralized state management

  • Backend: A Node.js environment to handle the logic and connection state

  • Real-Time Layer: Socket.io to manage the persistent bi-directional communication between the client and server

  • AI Build Tool: GitHub Copilot assisted with boilerplate code and logic refinement

For DevOps & Deployment:

  • Docker: For containerizing both the frontend and backend

  • GitHub Actions: To automate the CI/CD pipeline

  • AWS (EC2): Hosting the application on a virtual server

  • Nginx: Serving the static frontend and acting as a reverse proxy for WebSocket traffic

1. Why WebSockets

While standard REST APIs work for many things, chat requires low latency and a bi-directional flow.

Thanks to Michael Carter and Ian Hickson, WebSockets were born in 2008: a technology providing a full-duplex communication channel over a single, long-lived TCP connection.

Unlike HTTP's request-response model, WebSockets enable instant, low-latency data exchange, making them the ideal candidate for real-time applications like chats, games, and dashboards.

Key Aspects of WebSockets

  • Protocol: Uses ws:// (unencrypted) or wss:// (secure) protocols, operating over ports 80 or 443

  • Handshake: Initiated via a standard HTTP request, which is then upgraded to a WebSocket connection, allowing it to traverse most firewalls and proxies

  • Real-time Interaction: Enables servers to push data to clients immediately, eliminating the overhead of repeated HTTP polling

  • Persistence: The connection remains open, eliminating the overhead of establishing a new connection for every interaction

  • Use Cases: Essential for live chat, collaborative editing, gaming, and IoT updates

How it Works

  1. Handshake: The client sends an HTTP request with an Upgrade header to the server

  2. Connection: The server accepts the request with a 101 Switching Protocols response, upgrading to the WebSocket protocol

  3. Communication: A persistent, two-way connection is established, allowing data to flow freely in both directions until closed
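To make step 2 concrete: per RFC 6455, the server proves it understood the handshake by hashing the client's Sec-WebSocket-Key together with a fixed GUID and returning the result, base64-encoded, in the Sec-WebSocket-Accept response header. A minimal Node.js sketch (illustrative only - libraries like socket.io and ws do this for you):

```typescript
import { createHash } from 'node:crypto'

// RFC 6455: fixed GUID appended to the client's Sec-WebSocket-Key
const WS_MAGIC_GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11'

// Computes the Sec-WebSocket-Accept value the server must send back
function computeAcceptKey(secWebSocketKey: string): string {
  return createHash('sha1')
    .update(secWebSocketKey + WS_MAGIC_GUID)
    .digest('base64')
}

// Example key taken from the RFC itself:
console.log(computeAcceptKey('dGhlIHNhbXBsZSBub25jZQ=='))
// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

If the header the client receives doesn't match this value, the connection is rejected before any WebSocket frame is exchanged.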

2. The Backend (Node.js + Socket.io)

I began my quest with a simple, generic first prompt:

I want to build a chat app using WebSockets. Let's start by creating a simple Node.js server in /server, handling multiple connections.

The AI promptly created a fully working Node.js server.

As always, I reviewed the output to ensure the code matched what I wanted. Hours later, I added a name-change feature on client connection and decided to swap the raw ws implementation for socket.io to gain built-in reconnection logic and easier event handling.

For now, let's see the important sections (full source code here):

typescript
import { createServer } from 'node:http'
import { Server } from 'socket.io'
import express from 'express'

...

// Main entry point: returns server status and WebSocket URL
app.get('/', (request, response) => {
  response.json({
    status: 'ok',
    websocket: `ws://${request.headers.host ?? `localhost:${port}`}`,
  })
})

// Broadcast a message to all clients, optionally excluding one socket ID (= the sender)
function broadcast(payload, excludedSocketId) {
  if (excludedSocketId) {
    io.except(excludedSocketId).emit('server_event', payload)
    return
  }

  io.emit('server_event', payload)
}

// WebSocket connection handler
io.on('connection', (socket) => {
  // Assigns a unique client ID and default name
  const clientId = nextClientId
  nextClientId += 1

  const client = {
    id: clientId,
    name: `User ${clientId}`,
  }

  clients.set(socket, client)
  
  // Notifies the new client of their profile and current online users
  socket.emit('server_event', {
    type: 'welcome',
    client,
    clients: Array.from(clients.values()),
    onlineCount: clients.size,
  })

  console.log(`Client connected: ${client.name} (ID: ${client.id})`)
  
  // Broadcasts presence updates to all other clients
  broadcast(
    {
      type: 'presence',
      action: 'joined',
      client,
      onlineCount: clients.size,
    },
    socket.id,
  )

  // Handles incoming messages
  socket.on('chat_message', (payload) => {
    if (!payload || typeof payload !== 'object' || typeof payload.text !== 'string') {
      socket.emit('server_event', {
        type: 'error',
        message: 'Message text must be a valid string.',
      })
      return
    }

    const text = payload.text.trim()

    if (!text) {
      return
    }

    const message = {
      type: 'chat_message',
      text,
      client,
      sentAt: new Date().toISOString(),
    }

    broadcast(message, socket.id)

    socket.emit('server_event', message)
  })
  
  // Cleans up on disconnect
  socket.on('disconnect', () => {
    clients.delete(socket)

    broadcast({
      type: 'presence',
      action: 'left',
      client,
      onlineCount: clients.size,
    })

    console.log(`Client disconnected: ${client.name} (ID: ${client.id})`)
  })
})

// Start the server
server.listen(port, host, () => {
  console.log(`WebSocket server listening on http://${host}:${port}`)
})

Key Logic

When a user connects, the server:

  1. Assigns a numeric, incremental ID to the client

  2. Notifies the client of its ID and number of connected users

  3. Broadcasts the same information to all other clients

On chat message received, the server:

  1. Validates and trims the message

  2. Sets the sentAt field to current time

  3. Broadcasts the message to all clients

On client disconnect, the server:

  1. Removes the client from the connected clients map

  2. Broadcasts the updated presence info to all connected clients

Simple, logical, and functional. Exactly what I wanted.
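The validation step in the chat_message handler boils down to a small pure function. Here's an illustrative sketch of the same checks (the function name is mine; unlike the real handler, it doesn't distinguish invalid payloads, which trigger an error event, from empty ones, which are silently ignored):

```typescript
// Returns the trimmed message text, or null when the payload is unusable
function extractMessageText(payload: unknown): string | null {
  // Reject anything that isn't an object with a string `text` field
  if (!payload || typeof payload !== 'object') return null
  const text = (payload as { text?: unknown }).text
  if (typeof text !== 'string') return null

  // Trim whitespace and drop empty messages
  const trimmed = text.trim()
  return trimmed.length > 0 ? trimmed : null
}

console.log(extractMessageText({ text: '  hello  ' })) // → 'hello'
console.log(extractMessageText({ text: '   ' }))      // → null
console.log(extractMessageText('not an object'))      // → null
```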

3. The Frontend (Vue.js)

I started working on the frontend immediately after having the server up and ready. As I wanted a template to work with, I used the following prompt:

Let's design a basic yet functional layout for a chat application. It needs to have the following sections: a main chat window, the users list (left or right), and a text input with a "Send" button at the bottom of the chat interface. Add a login view to let users set their names.

The result was amazing, exactly as I imagined the layout to be:

First draft of the webchat layout

The Architecture

This was of course just a static template, and a good one indeed. Now, I needed to connect it to the server and make it fully dynamic.

But before doing that, I wanted to refactor the AI output and apply some codebase-wide architectural changes, the first being a move to a feature-based app folder structure, for clarity and future-proof scalability.

I identified three main features in my app: Chat, Login View and Users List:

  • The Chat would be in charge of orchestrating the whole message sending, receiving and displaying

  • The Login view only had to let users set their name before entering the chat

  • The Users List, located in the app sidebar, would handle displaying the connected users and their online status

Finally, I created two shared component folders, layout and ui, containing layout components and core UI components respectively.

Before jumping into writing components and logic, I also wanted to refine the models I would use (pre-generated from the backend work) - here's a preview:

typescript
export type UserStatus = 'online' | 'away' | 'offline'

export type User = {
  id: number;
  name: string;
  status: UserStatus;
}

export type Message = {
  id: string;
  authorId: number;
  authorName: string;
  text: string;
  timestamp: string;
  own?: boolean;
}

export type ServerClient = {
  id: number;
  name: string;
}

// These evolved with time, especially as I added
// the "welcome" event (= login view) when only
// half-way through the project
export type ServerEvent =
  | {
      type: 'welcome';
      client: ServerClient;
      clients: ServerClient[];
      onlineCount: number;
    }
  | {
      type: 'presence';
      action: 'joined' | 'left' | 'updated';
      client: ServerClient;
      onlineCount: number;
    }
  | {
      type: 'chat_message';
      client: ServerClient;
      text: string;
      sentAt: string;
    }
  | {
      type: 'profile';
      client: ServerClient;
    }
  | {
      type: 'error';
      message: string;
    }
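A nice side effect of modeling ServerEvent as a discriminated union is that the client can handle events with an exhaustive switch: TypeScript narrows the type in each branch and flags any unhandled variant at compile time. A small self-contained sketch (describeEvent is an illustrative helper, not part of the app):

```typescript
// Local copies of the app's models, so this sketch stands alone
type ServerClient = { id: number; name: string }

type ServerEvent =
  | { type: 'welcome'; client: ServerClient; clients: ServerClient[]; onlineCount: number }
  | { type: 'presence'; action: 'joined' | 'left' | 'updated'; client: ServerClient; onlineCount: number }
  | { type: 'chat_message'; client: ServerClient; text: string; sentAt: string }
  | { type: 'profile'; client: ServerClient }
  | { type: 'error'; message: string }

// TypeScript narrows `event` inside each case on the `type` field
function describeEvent(event: ServerEvent): string {
  switch (event.type) {
    case 'welcome':
      return `Welcome ${event.client.name} (${event.onlineCount} online)`
    case 'presence':
      return `${event.client.name} ${event.action}`
    case 'chat_message':
      return `${event.client.name}: ${event.text}`
    case 'profile':
      return `${event.client.name} updated their profile`
    case 'error':
      return `Error: ${event.message}`
    default: {
      // Exhaustiveness check: compilation fails here if a variant is unhandled
      const _exhaustive: never = event
      return _exhaustive
    }
  }
}

console.log(describeEvent({ type: 'error', message: 'Message text must be a valid string.' }))
// → Error: Message text must be a valid string.
```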

Data, Business and Presentation Layers

Following the principles behind the three-layered architecture, I wanted a clean presentation layer, with clear distinction between business logic and components rendering the UI.

To achieve that, I created:

  • useChatSocket.ts composable, in charge of handling all WebSocket-related logic: connection, disconnection, sending and receiving messages, etc.

  • Pure rendering components - like MessageBlock.vue or MessageList.vue - which would take care of UI rendering

  • A main holder component, Chat.vue, in charge of wiring the data received from the composable with the final rendering components

I also decided to use two main Pinia stores - @/store/users and @/store/messages - to handle users and messages respectively.

This made it possible to have a centralized place to store the data, and access it with ease in horizontal features like Users List, without having to do acrobatic props-drilling gymnastics.

4. Dockerizing the Application

To ensure the app runs the same on my machine as it does on AWS, I created two Dockerfile configurations, one for the server and one for the frontend app.

Dockerfile (server):

docker
FROM node:22-alpine

WORKDIR /app

RUN corepack enable

COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile --production=true

COPY server ./server

EXPOSE 3001

ENV PORT=3001
ENV WS_HOST=0.0.0.0

CMD ["yarn", "server"]

Dockerfile (frontend):

docker
# Stage 1: build the Vue.js app
FROM node:22-alpine AS builder

WORKDIR /app

RUN corepack enable

COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

COPY . .
RUN yarn build

# Stage 2: serve via Nginx
FROM nginx:alpine

COPY --from=builder /app/dist /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf

I also used Docker Compose to spin up both the frontend and backend with a single command: docker-compose up.
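For reference, the Compose file tying the two images together looks roughly like this. It's a sketch: the server service name is grounded in the nginx.conf below (which proxies to http://server:3001), but the dockerfile paths and the frontend service name are assumptions:

```yaml
services:
  server:
    build:
      context: .
      dockerfile: server/Dockerfile   # assumed path
    expose:
      - "3001"                        # reachable only on the internal Docker network

  frontend:
    build:
      context: .
      dockerfile: Dockerfile          # assumed path
    ports:
      - "80:80"                       # the single public entry point (Nginx)
    depends_on:
      - server
```

Note that only the frontend publishes a port to the host; the backend stays private to the Compose network.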

As you may have noticed, an nginx.conf file is copied into the frontend image: that's because, under the hood, I use an Nginx server to serve the frontend app and proxy traffic to the Node.js backend:

nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Proxy WebSocket connections
    location /ws {
        proxy_pass http://server:3001;
        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Keep WebSocket connections alive
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }

    # Serve Vue SPA — try files, fallback to index.html for client-side routing
    location / {
        try_files $uri $uri/ /index.html;
    }
}

The Nginx static server + reverse proxy configuration works as follows:

  • serves the Vue.js SPA from /usr/share/nginx/html, with try_files $uri $uri/ /index.html falling back to index.html for Vue Router's client-side routes

  • proxies WebSocket connections at /ws to the backend (port 3001)

  • hides the backend from external connections, as it communicates with the Vue app only internally via Docker networking

This provides a single entry point at port 80 (standard HTTP), improving security.

5. Deploying to AWS

For deployment, I chose AWS EC2 (Elastic Compute Cloud), as I wanted to fully dig into the basics of AWS cloud deployments.

To achieve that, I created an AWS account, set up an EC2 instance, verified it was live and working, and then started writing my deploy.yml GitHub Action:

yaml
name: Deploy to EC2

on:
  push:
    branches:
      - main

env:
  FORCE_JAVASCRIPT_ACTIONS_TO_NODE24: true

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Deploy to EC2
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.EC2_HOST }}
          username: ${{ secrets.EC2_USER }}
          key: ${{ secrets.EC2_SSH_KEY }}
          script: |
            cd /home/ubuntu/websockets-chat-app
            git pull origin main
            docker-compose down
            docker-compose up -d --build
            docker-compose logs server

The workflow is the following:

  1. When pushing to main, the GitHub Action is triggered

  2. It checks out the code and connects to the EC2 instance over SSH

  3. Pulls the latest changes on the instance

  4. Shuts down the running Docker app, if any

  5. Rebuilds the images from scratch and starts both the server and the frontend with docker-compose up -d --build

To make this strategy work, I first had to log in to the EC2 instance and check out the source code into the deployment folder, /home/ubuntu/websockets-chat-app.

Lessons Learned & Final Thoughts

Despite being two years into using AI-aided development tools, I was still surprised by how much they can speed up the development process. To be honest, I don't think I could have completed this project alone in just a few hours.

Nonetheless, there was still a fair amount of architectural thinking and senior-level judgement I brought to the table - which means the era when AI takes over our jobs is still far away.

The WebChat I built is in practice a true MVP: it covers all the basics but lacks more advanced features like in-chat commands (e.g. name change or logout) and emoticon support, and it has a few bugs I've already spotted and plan to fix.

Cloud-wise, even though the deployment strategy is practical and efficient for this use case, I'm still far away from calling myself a DevOps expert - and, to be honest, I'm ok with that. No complaints here.

To conclude, I leave you with links to check out the code and see the live app in action: