
Build Docs

A chronological walkthrough of how COMPILEME was built — from empty folder to a fully sandboxed online C++ compiler.

Chapter 1
Project Setup & Folder Structure

The first decision was the tech stack. The goal was a minimal, dependency-light prototype — no databases, no auth, no framework overhead.

The resulting folder layout:

COMPILEME/
├── backend/
│   ├── server.js          ← Express API
│   └── package.json
├── frontend/
│   └── index.html         ← UI (editor + output)
├── docs/
│   └── index.html         ← this page
├── Dockerfile             ← containerises the backend
├── docker-compose.yml     ← mounts Docker socket
└── README.md

No temp files are written to disk on the host. Code travels through stdin directly into the Docker container.

Chapter 2
Backend — Express API

The backend is a single file: backend/server.js. It exposes one endpoint and serves the frontend as static files.

Endpoint

POST /run
Content-Type: application/json

{ "code": "<C++ source>" }

→ 200 OK
{ "output": "<stdout + stderr>" }

Request validation
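
The docs don't reproduce the validation code itself. A minimal sketch of what it likely checks, assuming the 50 KB cap described in Chapter 5 (the helper name validateCode is hypothetical; server.js may inline these checks):

```javascript
// Hypothetical helper; the real server.js may inline these checks.
const MAX_BYTES = 50 * 1024; // 50 KB cap on submitted source (see Chapter 5)

function validateCode(code) {
  if (typeof code !== 'string' || code.trim() === '') {
    return 'Field "code" must be a non-empty string';
  }
  if (Buffer.byteLength(code, 'utf8') > MAX_BYTES) {
    return 'Source exceeds the 50 KB limit';
  }
  return null; // null means the payload is acceptable
}
```

A failed check would be answered with a 400 response before Docker is ever invoked.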

Child process

Node's built-in child_process.spawn launches docker run. We use spawn (not exec) so we can pipe stdin without shell quoting issues.

const proc = spawn('docker', dockerArgs);
proc.stdin.write(code);   // send C++ source into container
proc.stdin.end();

Both stdout and stderr of the Docker process are collected into a single string and returned as output.

Chapter 3
Docker Sandboxing

Every code submission gets its own brand-new container. There is no shared state.

Execution strategy

Instead of mounting host directories, the source code is piped via stdin:

bash -c 'cat > /tmp/main.cpp \
  && g++ /tmp/main.cpp -o /tmp/main 2>&1 \
  && timeout 2 /tmp/main 2>&1'

cat reads stdin and writes /tmp/main.cpp. Then g++ compiles it. If compilation succeeds, timeout 2 /tmp/main runs the binary with a hard 2-second wall-clock limit.

Resource limits applied

Flag                  Value              What it prevents
--memory=256m         256 MB             Memory exhaustion / OOM attacks
--memory-swap=256m    256 MB (= memory)  Disables swap to enforce hard cap
--cpus=1              1 core             CPU hogging / spinning loops
--network=none        -                  Any inbound or outbound network access
--pids-limit=64       64 processes       Fork bombs
timeout 2             2 s                Infinite loops, sleep() abuse
--rm                  -                  Ensures container is deleted on exit
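
Put together, the flags in the table would translate into a spawn argument list roughly like the following (the variable name and flag order are assumptions; the bash one-liner is the one from the execution strategy above):

```javascript
// Sketch: the table's flags assembled into arguments for
// spawn('docker', dockerArgs). Naming and ordering are assumptions.
const dockerArgs = [
  'run', '--rm', '-i',    // -i keeps stdin open for the piped source
  '--memory=256m',
  '--memory-swap=256m',   // equal to --memory, so no swap headroom
  '--cpus=1',
  '--network=none',
  '--pids-limit=64',
  'gcc:latest',
  'bash', '-c',
  "cat > /tmp/main.cpp && g++ /tmp/main.cpp -o /tmp/main 2>&1 && timeout 2 /tmp/main 2>&1",
];
```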

Server-side hard timeout

A 15-second setTimeout in Node kills the Docker process with SIGKILL if Docker itself stalls (e.g., image pull hangs, daemon unresponsive). This prevents the HTTP request from hanging indefinitely.
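
A sketch of that kill switch (armKillSwitch is a hypothetical name; the real code may simply inline the setTimeout in the route handler):

```javascript
// Sketch: SIGKILL the spawned docker process if it outlives a
// 15-second wall-clock budget, and disarm the timer on normal exit.
const HARD_TIMEOUT_MS = 15000;

function armKillSwitch(proc, ms = HARD_TIMEOUT_MS) {
  const timer = setTimeout(() => proc.kill('SIGKILL'), ms);
  proc.on('close', () => clearTimeout(timer)); // disarm on normal exit
  return timer;
}
```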

The first run may take 30–60 s while Docker pulls gcc:latest (~1.2 GB). Run docker pull gcc:latest once before starting the server.

Chapter 4
Frontend UI

A single frontend/index.html — no build step, no npm install, no framework. Express serves it as a static file at /.

Key UI elements

A code editor for the C++ source, a Run button, and an output box that displays whatever the server returns (compiler errors or program output).

fetch call

const res  = await fetch('/run', {
  method:  'POST',
  headers: { 'Content-Type': 'application/json' },
  body:    JSON.stringify({ code })
});
const data = await res.json();
outputEl.textContent = data.output;

Frontend and backend share the same origin (localhost:3300), so no CORS configuration is needed.

Chapter 5
Security Decisions

No shell injection

The C++ code is never interpolated into a shell string. It is written to proc.stdin as raw bytes. This means backticks, semicolons, dollar signs, quotes — none of them can escape the container.

No host filesystem access

No directories are mounted from the host. The container's /tmp is ephemeral and disappears with --rm.

No network from containers

--network=none means user code cannot make HTTP requests, call external APIs, exfiltrate data, or connect back to an attacker.

Input size cap

Payloads over 50 KB are rejected before Docker is ever invoked. Express's json({ limit: '100kb' }) handles the HTTP layer.

Chapter 6
Request Flow — End to End

Browser (clicks Run) → POST /run (JSON body) → Express (validates input) → docker run (stdin = code) → gcc container (compile + run) → stdout/stderr (collected) → JSON response (output field) → Output box (displayed)
  1. User writes C++ code and clicks Run.
  2. Browser sends POST /run with the code as a JSON string.
  3. Express validates and size-checks the payload.
  4. Node spawns docker run --rm -i gcc:latest bash -c '...'.
  5. The C++ source is piped to cat inside the container → written to /tmp/main.cpp.
  6. g++ compiles the file. Errors go to stderr, redirected to stdout via 2>&1.
  7. If compilation succeeds, timeout 2 /tmp/main executes the binary.
  8. All output (stdout + stderr) is collected by Node.
  9. Container exits and is auto-removed (--rm).
  10. Node returns { "output": "..." } — browser displays it.

Chapter 7
Running Locally

Prerequisites

Docker (the server shells out to docker run) and Node.js with npm (to install dependencies and run the backend).

Option A — run directly on host (simplest)

# 1. Pull the gcc image once (saves time on first run)
docker pull gcc:latest

# 2. Install backend dependencies
cd backend && npm install && cd ..

# 3. Start the server
node backend/server.js

# 4. Open in browser
open http://localhost:3300

Option B — run via Docker Compose

# Build and start the containerised backend
docker-compose up --build

# Open in browser
open http://localhost:3300

The docs page is served at http://localhost:3300/docs.

Environment variable

Override the default port with PORT:

PORT=8080 node backend/server.js
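
On the server side, this presumably corresponds to a line like the following in server.js (the exact expression is an assumption; 3300 is the default port the docs use throughout):

```javascript
// Assumed shape of the port lookup in server.js: use PORT from the
// environment when set, otherwise fall back to the default 3300.
const PORT = Number(process.env.PORT) || 3300;
```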