@kocienda
Created January 16, 2026 21:26

Static Site + Blog Infrastructure Plan

Astro + MDX · rsync deploy · Linode · Cloudflare DNS

This document defines a boring, sane, long-lived infrastructure for a content-driven website and blog.

The guiding principle is simple:

Build locally. Inspect everything. Deploy files. Serve static content.

No CI rebuilds. No Docker. No GitHub in the deploy path. No surprises.

Phase 0 — Invariants (non-negotiable) ✅

These constraints govern every decision that follows.

  1. The server is dumb

    • Serves static files only
    • No Node, Bun, Docker, or build tooling on the server
  2. The site is content

    • Source of truth = files in a git repo
    • Every git commit is a potential release
  3. Build happens locally

    • Deterministic
    • Inspectable
    • Repeatable
  4. Deploy is atomic

    • Upload new files
    • Flip one symlink
    • Done
  5. GitHub is not part of deployment

    • Used only for revision control
    • No CI, no webhooks, no apps, no secrets

If any future "improvement" violates these rules, it is rejected.

Phase 1 — Infrastructure bootstrap

Linode + Cloudflare DNS + HTTPS

Goal: https://tugtool.dev serves a static index.html over HTTPS.

1.1 Provision the server

  • Linode VM
  • Ubuntu 24.04 LTS
  • Small instance is sufficient (static files only)

1.2 Cloudflare DNS

Cloudflare is used explicitly to simplify analytics later.

  • Move tugtool.dev DNS to Cloudflare
  • Create:
    • A record → Linode IPv4
    • AAAA record → Linode IPv6 (optional)
  • Leave proxy ("orange cloud") off initially (DNS-only, origin-served)

Note (registrar vs DNS): If the domain is registered at Namecheap (or any registrar), that is separate from DNS hosting. To use Cloudflare DNS, you change the domain's nameservers at the registrar to the Cloudflare-provided nameservers. After that, DNS records are managed in Cloudflare.

Door open for later: If we enable the Cloudflare proxy later, this plan still works. The origin stays static; only caching/CDN behavior changes upstream.

1.3 Install Caddy

Caddy handles HTTPS automatically via Let's Encrypt.

sudo apt update
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
  | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
  | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy

Check with:

sudo systemctl status caddy

1.4 Server directory layout + users & permissions

/srv/tugtool/
  releases/
  current -> /srv/tugtool/releases/<release-id>

  • Caddy serves /srv/tugtool/current

Goal: allow deploys without SSHing as root, while keeping the server "dumb" (static file serving only).

  • Create a shared group: site
  • Create a dedicated Unix user: deploy (member of site)
  • Add caddy to the site group
  • SSH: key-based auth only; multiple keys allowed (deploys can happen from any machine with an authorized key)
  • Filesystem policy:
    • /srv/tugtool owned by deploy:site with group-readable permissions
    • deploy has write access to /srv/tugtool/releases/ and can update the current symlink
    • Caddy has read access via site group membership
    • No build tools on the server; the server never runs node, pnpm, etc.

# Create directory structure
sudo mkdir -p /srv/tugtool/releases

# Create site group and deploy user
sudo groupadd site
sudo useradd -m -g site -s /bin/bash deploy
sudo usermod -aG site caddy

# Set ownership and permissions
sudo chown -R deploy:site /srv/tugtool
sudo chmod -R u=rwX,g=rX,o= /srv/tugtool
sudo chmod 2750 /srv/tugtool /srv/tugtool/releases

1.4.1 Monitoring

Enable Linode's built-in monitoring for basic uptime and resource alerts:

  • CPU, memory, disk usage alerts
  • Network connectivity monitoring
  • Configure email alerts for anomalies

This provides "good enough" monitoring without operational complexity.

1.4.2 SSH access for deploy user

Copy your public key to the deploy user. From your local machine:

# Option A: If you can still SSH as root
ssh-copy-id -i ~/.ssh/id_ed25519.pub deploy@tugtool.dev

# Option B: Manually (run on server as root)
sudo -u deploy mkdir -p /home/deploy/.ssh
sudo -u deploy chmod 700 /home/deploy/.ssh
echo "YOUR_PUBLIC_KEY_HERE" | sudo -u deploy tee /home/deploy/.ssh/authorized_keys
sudo -u deploy chmod 600 /home/deploy/.ssh/authorized_keys

Test SSH login as deploy (from local machine):

ssh deploy@tugtool.dev

1.4.3 Sudo privileges for deploy

Give deploy passwordless sudo, so that anyone who can log in as deploy can administer the box:

echo "deploy ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/deploy
sudo chmod 440 /etc/sudoers.d/deploy

Test:

sudo whoami
# Should return: root (no password prompt)

Note: The deploy script won't need sudo — it runs as deploy which owns /srv/tugtool. Sudo is for interactive maintenance tasks.

1.4.4 SSH hardening

Edit SSH config to disable password auth and root login:

sudo tee /etc/ssh/sshd_config.d/hardening.conf << 'EOF'
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
EOF

Validate and restart:

Warning: Before doing this, confirm you can SSH as deploy with your key. If you lock yourself out, you'll need Linode console access (Lish) to recover.

sudo sshd -t && sudo systemctl restart ssh

1.4.5 Firewall (UFW)

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

Verify:

sudo ufw status
# Should show: 22, 80, 443 allowed

1.5 First "hello world"

Create test content (as deploy user):

sudo -u deploy mkdir -p /srv/tugtool/releases/hello
echo "hello world" | sudo -u deploy tee /srv/tugtool/releases/hello/index.html
sudo -u deploy ln -sfn /srv/tugtool/releases/hello /srv/tugtool/current

Write the complete Caddyfile:

sudo tee /etc/caddy/Caddyfile << 'EOF'
tugtool.dev {
  respond /health "ok\n" 200
  root * /srv/tugtool/current
  handle_errors {
    @notfound expression {http.error.status_code} == 404
    rewrite @notfound /404/index.html
    file_server
  }
  file_server
}

www.tugtool.dev {
  redir https://tugtool.dev{uri} permanent
}
EOF

Format and reload:

sudo caddy fmt --overwrite /etc/caddy/Caddyfile
sudo systemctl reload caddy

Verify:

curl https://tugtool.dev
# Should return: hello world

Deliverable: https://tugtool.dev serves a static page over HTTPS.

Phase 2 — Release identity & atomic switching

Goal: Every deployed version is identified by a 7-character git commit hash.

2.1 Release naming

Release ID:

git rev-parse --short=7 HEAD

Server layout:

/srv/tugtool/releases/
  a7c91e2/
  f13b09d/
current -> /srv/tugtool/releases/f13b09d

This is:

  • Unique
  • Human-traceable
  • Content-accurate
  • Naturally ordered by git history

2.2 VERSION marker

Each release contains a file: VERSION

Contents:

commit a7c91e2

2.3 Atomic switch

Cutover is a single operation:

ln -sfn /srv/tugtool/releases/<commit> /srv/tugtool/current

  • Instant
  • Atomic
  • Reversible
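One caveat worth knowing: GNU ln -sfn unlinks the old link and then creates the new one, so there is a tiny window with no current. A symlink-plus-rename flip is atomic in the strict sense. A minimal Python sketch (the function name anticipates the deploy script's flip_symlink; the paths here are scratch paths for illustration only):

```python
import os
import tempfile
from pathlib import Path

def flip_symlink(target: Path, link: Path) -> None:
    """Atomically point `link` at `target`: build a temp symlink, then rename over the old link."""
    tmp = link.with_name(link.name + ".tmp")
    if tmp.is_symlink() or tmp.exists():
        tmp.unlink()
    os.symlink(target, tmp)
    os.replace(tmp, link)  # rename(2): readers see either the old target or the new one, never nothing

# Demonstration against a scratch directory:
base = Path(tempfile.mkdtemp())
(base / "releases" / "a7c91e2").mkdir(parents=True)
(base / "releases" / "f13b09d").mkdir()
flip_symlink(base / "releases" / "a7c91e2", base / "current")
flip_symlink(base / "releases" / "f13b09d", base / "current")
print(os.readlink(base / "current"))  # .../releases/f13b09d
```

The plain ln -sfn cutover is fine in practice for a site like this; the rename variant is there if the deploy script ever needs the hard guarantee.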

Deliverable: Any commit can be made live or rolled back instantly.

Phase 3 — One-liner deployment

Goal: Deploy the currently checked-out commit with one command.

3.1 Deploy contract

  • Input: current git commit
  • Output: site live on tugtool.dev
  • No SSH shells
  • No manual steps

3.2 Deploy script responsibilities

The script is written in Python 3.12+ (bash is inadequate for this complexity).

The script must:

  1. Assert clean working tree (hard failure — a dirty tree means the commit hash doesn't represent what you're deploying)
  2. Compute commit hash
  3. Run local build
  4. Validate content invariants locally (fail fast with excellent error messages):
    • Slugs must be URL-safe and well-formed
    • Slugs must be globally unique (single flat namespace)
  5. Compute content manifest hash (SHA256 of all file paths + contents) for verification
  6. rsync build output to /srv/tugtool/releases/<commit>/ (idempotent — re-running deploys the same commit safely)
  7. Verify remote content matches local manifest hash
  8. Flip current symlink via SSH
  9. Prune old releases, keeping last 3 (easy to redeploy any old commit from git)

Re-deploy behavior: If a deploy fails mid-transfer or needs retry, running ./deploy again for the same commit will rsync the correct files over any partial state and verify integrity. No manual cleanup needed.

Implementation notes:

  • Manifest algorithm must be deterministic:
    • Include relative file path + file contents for every file in dist/
    • Sort file paths bytewise before hashing
  • Remote verification must use only baseline tooling on Ubuntu:
    • Prefer running a short python3 snippet over SSH to compute the same manifest on the remote directory.
    • No server-side Node. No server-side build.

Python dependencies (keep minimal, pinned):

  • PyYAML for frontmatter parsing/validation (slug checks, required fields, etc.)
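Frontmatter extraction needs nothing beyond PyYAML and the standard library. A sketch of an extract_frontmatter helper, assuming posts open with a ----delimited YAML block:

```python
import re
import yaml  # PyYAML, the one pinned dependency

FRONTMATTER_RE = re.compile(r"\A---\s*\n(.*?)\n---\s*\n", re.DOTALL)

def extract_frontmatter(content: str) -> dict:
    """Parse the YAML between the leading '---' fences; fail loudly if absent or malformed."""
    match = FRONTMATTER_RE.match(content)
    if match is None:
        raise ValueError("no frontmatter block found")
    data = yaml.safe_load(match.group(1))
    if not isinstance(data, dict):
        raise ValueError("frontmatter is not a YAML mapping")
    return data

post = '''---
title: "Hello World"
slug: hello-world
tags: []
---

# Hello World
'''
fm = extract_frontmatter(post)
print(fm["slug"], fm["tags"])  # hello-world []
```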

3.3 Invocation

./deploy

No arguments. No modes.

Deliverable: A single, reliable deploy command.

3.4 Implementation Plan

Files to Create

  1. ./deploy — executable Python script (chmod +x, shebang line)
  2. requirements.txt — PyYAML dependency (pinned)

Configuration (hardcoded constants)

REMOTE_HOST = "tugtool.dev"
REMOTE_USER = "deploy"
REMOTE_BASE = "/srv/tugtool"
RELEASES_DIR = f"{REMOTE_BASE}/releases"
CURRENT_LINK = f"{REMOTE_BASE}/current"
KEEP_RELEASES = 3
LOCAL_DIST = Path("dist")
CONTENT_DIR = Path("src/content/journal")

Deployment Steps (in order)

  1. Assert clean working tree: git status --porcelain, hard fail if dirty
  2. Get commit hash: git rev-parse --short=7 HEAD
  3. Run build: pnpm astro build
  4. Validate content: parse MDX frontmatter, check slugs
  5. Compute manifest hash: SHA256 of sorted file paths + contents in dist/
  6. rsync to server: rsync -avz --delete --checksum dist/ deploy@tugtool.dev:/srv/tugtool/releases/{commit}/
  7. Verify remote hash: run python3 snippet over SSH, compare hashes
  8. Flip symlink: ln -sfn /srv/tugtool/releases/{commit} /srv/tugtool/current
  9. Prune old releases: keep last 3, remove older
  10. Write VERSION file: commit {hash} in the release directory

Key Algorithms

Slug Validation:

SLUG_PATTERN = re.compile(r'^[a-z0-9]+(?:-[a-z0-9]+)*$')

  • Lowercase alphanumeric with hyphens only
  • No leading/trailing hyphens, no consecutive hyphens
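Well-formedness and global uniqueness can be checked in a single pass. A sketch that pairs the pattern above with a duplicate check and collects every error rather than stopping at the first (the error strings are illustrative):

```python
import re

SLUG_PATTERN = re.compile(r'^[a-z0-9]+(?:-[a-z0-9]+)*$')

def validate_slugs(slugs: list[str]) -> list[str]:
    """Return human-readable errors: malformed slugs and duplicates across the collection."""
    errors = []
    seen = set()
    for slug in slugs:
        if not SLUG_PATTERN.match(slug):
            errors.append(f"invalid slug: {slug!r}")
        if slug in seen:
            errors.append(f"duplicate slug: {slug!r}")
        seen.add(slug)
    return errors

print(validate_slugs(["hello-world", "Hello", "hello-world", "a--b"]))
# ["invalid slug: 'Hello'", "duplicate slug: 'hello-world'", "invalid slug: 'a--b'"]
```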

Manifest Hash (must be identical local and remote):

def compute_manifest_hash(dist_dir: Path) -> str:
    hasher = hashlib.sha256()
    files = sorted(
        (p.relative_to(dist_dir) for p in dist_dir.rglob("*") if p.is_file()),
        key=lambda p: str(p).encode('utf-8')
    )
    for rel_path in files:
        hasher.update(str(rel_path).encode('utf-8'))
        hasher.update(b'\n')
        hasher.update((dist_dir / rel_path).read_bytes())
        hasher.update(b'\n')
    return hasher.hexdigest()

Remote Verification:

  • SSH into server, run identical Python hash algorithm
  • Compare hex digests, fail if mismatch
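For the remote side, the manifest algorithm can be shipped to the server as a short python3 stdin script, so nothing extra is ever installed there. A sketch that only constructs the SSH invocation (host, user, and paths follow the constants in 3.4; the actual subprocess call is omitted so the logic stays testable without a server):

```python
import shlex

REMOTE_USER = "deploy"
REMOTE_HOST = "tugtool.dev"
RELEASES_DIR = "/srv/tugtool/releases"

# The same algorithm as compute_manifest_hash, as a stdin script for stock python3.
REMOTE_SNIPPET = r"""
import hashlib, sys
from pathlib import Path
d = Path(sys.argv[1])
h = hashlib.sha256()
files = sorted((p.relative_to(d) for p in d.rglob('*') if p.is_file()),
               key=lambda p: str(p).encode('utf-8'))
for rel in files:
    h.update(str(rel).encode('utf-8')); h.update(b'\n')
    h.update((d / rel).read_bytes()); h.update(b'\n')
print(h.hexdigest())
"""

def build_remote_hash_cmd(commit: str) -> list[str]:
    """argv for: ssh deploy@host python3 - <release-dir>; pipe REMOTE_SNIPPET to stdin."""
    release_dir = f"{RELEASES_DIR}/{commit}"
    # ssh joins its arguments into one remote shell command line, so quote the path.
    return ["ssh", f"{REMOTE_USER}@{REMOTE_HOST}", "python3", "-", shlex.quote(release_dir)]

print(build_remote_hash_cmd("a7c91e2"))
```

The caller would run subprocess.run(cmd, input=REMOTE_SNIPPET, capture_output=True, text=True) and compare stripped stdout against the local digest.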

Function Structure

main()
├── assert_clean_working_tree()
├── get_commit_hash() -> str
├── run_build()
├── validate_content() -> int
│   ├── extract_frontmatter(content) -> dict
│   └── validate_slug(slug) -> bool
├── compute_manifest_hash() -> str
├── rsync_to_server(commit) -> int
├── verify_remote_hash(commit, expected_hash)
├── flip_symlink(commit)
├── prune_old_releases() -> list[str]
└── write_version_file(commit)

Output Format

Deploying commit a7c91e2...

[1/9] Checking working tree... ok
[2/9] Running build... ok (3.2s)
[3/9] Validating content... ok (4 posts)
[4/9] Computing manifest hash... ok (sha256:e3b0c442...)
[5/9] Uploading to server... ok (rsync: 42 files)
[6/9] Verifying remote content... ok
[7/9] Switching to new release... ok
[8/9] Pruning old releases... ok (removed: b2c91e3)
[9/9] Writing VERSION file... ok

Deployed! https://tugtool.dev is now serving commit a7c91e2

Error Handling

  • Custom DeployError exception with descriptive messages
  • Fail fast on any error
  • Clear messages for: dirty tree, build failure, invalid slug, duplicate slug, rsync failure, hash mismatch

Dependencies

Files to create:

  • requirements.txt with pyyaml==6.0.1 (pinned)

Usage:

pip install -r requirements.txt

Script imports: subprocess, hashlib, pathlib, re (all stdlib) + yaml (PyYAML)

Edge Cases Handled

  • Empty content directory (valid, deploy proceeds)
  • Re-deploy same commit (idempotent)
  • Partial previous deploy (rsync repairs)
  • No SSH key configured (clear error from ssh)
  • dist/ doesn't exist (check before hash, clear error)

Verification

Note: Full verification requires Phase 4 (Astro setup) to be complete. The build step needs pnpm astro build to work.

What can be verified now (Phase 3):

  1. Test dirty tree rejection

    echo "test" > test.txt
    ./deploy  # Should fail with "Working tree is dirty"
    rm test.txt
  2. Test clean tree detection

    ./deploy  # Should pass step 1, fail at step 2 (build) with "No package.json"

Verify after Phase 4 is complete:

  1. Full deploy test

    ./deploy
    curl https://tugtool.dev
  2. Test idempotency

    ./deploy  # Run twice, should succeed both times
  3. Verify VERSION file

    ssh deploy@tugtool.dev cat /srv/tugtool/current/VERSION
  4. Verify manifest hash

    • Deploy, note hash
    • SSH to server, run hash algorithm manually
    • Compare results

Phase 4 Implementation Plan — Astro Baseline

Overview

Set up Astro as a static site generator with Tailwind CSS v4, React (for icons only), and all required integrations.

Files to Create

  1. .nvmrc — Node version pin
  2. package.json — Dependencies and scripts
  3. astro.config.mjs — Astro configuration
  4. src/styles/global.css — Global styles with Tailwind + font definitions
  5. src/layouts/BaseLayout.astro — Base HTML layout
  6. src/pages/index.astro — Homepage
  7. src/pages/404.astro — 404 error page
  8. public/robots.txt — Allow all crawlers
  9. public/fonts/ — Self-hosted font files (user provides)

Step-by-Step Implementation

Step 4.1: Create .nvmrc

22.11.0

Step 4.2: Initialize pnpm and install dependencies

pnpm init
pnpm add astro @astrojs/mdx @astrojs/rss @astrojs/sitemap @astrojs/react react react-dom
pnpm add tailwindcss @tailwindcss/vite
pnpm add remark-gfm lucide-react clsx

Step 4.3: Create package.json scripts

Ensure package.json has:

{
  "name": "tugtool-site",
  "type": "module",
  "scripts": {
    "dev": "astro dev",
    "build": "astro build",
    "preview": "astro preview"
  }
}

Step 4.4: Create astro.config.mjs

import { defineConfig } from 'astro/config';
import mdx from '@astrojs/mdx';
import sitemap from '@astrojs/sitemap';
import react from '@astrojs/react';
import tailwindcss from '@tailwindcss/vite';
import remarkGfm from 'remark-gfm';

export default defineConfig({
  site: 'https://tugtool.dev',
  output: 'static',
  integrations: [
    mdx(),
    sitemap(),
    react(),
  ],
  markdown: {
    remarkPlugins: [remarkGfm],
  },
  vite: {
    plugins: [tailwindcss()],
  },
});

Step 4.5: Create public/fonts/ directory and add font files

mkdir -p public/fonts

Font files:

  • Download Inter from fonts.google.com (provides TTF variable fonts)
  • Typography.com fonts: deferred

Converting TTF to WOFF2 (recommended for smaller file size):

Install Google's official woff2 tool:

brew install woff2

Convert:

cd public/fonts
woff2_compress Inter-VariableFont_opsz,wght.ttf
woff2_compress Inter-Italic-VariableFont_opsz,wght.ttf

This creates .woff2 files (~350KB) alongside the TTFs (~875KB). Delete the TTFs after conversion if desired.

Alternative: Use TTF files directly. They work in all modern browsers; the only downside is larger file size.

Expected files after conversion:

public/fonts/
  Inter-VariableFont_opsz,wght.woff2
  Inter-Italic-VariableFont_opsz,wght.woff2

Step 4.6: Create src/styles/global.css

Create global styles with:

  • Tailwind CSS import
  • @custom-variant dark for dark mode support
  • @font-face definitions for self-hosted fonts (reference files in /fonts/)
  • @theme inline block with CSS custom properties for colors, radii, etc.
  • :root and .dark blocks with light/dark color schemes
  • @layer base with default styles for body, links, borders
  • .prose styles for MDX content (code blocks, headings, blockquotes)

The specific color palette, theme variables, and prose styles are left as an exercise — adapt from an existing design system or create your own.
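As a concrete starting point, the @font-face definitions for the Step 4.5 files might look like this (a sketch: the 100 900 weight range assumes Inter's full variable axis, and font-display: swap gives the system-font fallback mentioned in the Notes):

```css
@font-face {
  font-family: "Inter";
  src: url("/fonts/Inter-VariableFont_opsz,wght.woff2") format("woff2");
  font-weight: 100 900;
  font-style: normal;
  font-display: swap;
}

@font-face {
  font-family: "Inter";
  src: url("/fonts/Inter-Italic-VariableFont_opsz,wght.woff2") format("woff2");
  font-weight: 100 900;
  font-style: italic;
  font-display: swap;
}
```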

Step 4.7: Create src/layouts/BaseLayout.astro

Create the base HTML layout with:

  • Imports: site config, global.css
  • Props interface: title, description, ogImage
  • Head: charset, viewport, color-scheme meta, title, description, canonical URL, favicon, OG/Twitter meta tags, RSS link
  • Theme detection script (check localStorage and prefers-color-scheme, apply .dark class)
  • Body: gradient background overlay, sticky header with logo and nav, main content slot, footer
  • Header: logo (with dark/light variants), nav links with current-path highlighting, theme toggle button
  • Footer: copyright, RSS link, GitHub link
  • Theme toggle script (toggle .dark class and persist to localStorage)

Content collection queries (for nav drawer, etc.) should be commented out until Phase 5.

Step 4.8: Create src/pages/index.astro

Create the homepage with:

  • Import BaseLayout and site config
  • Hero section with headline, description paragraphs, CTA buttons
  • Grid layout with text on left, hero image on right
  • "Latest entries" section (commented out until Phase 5 content collections are configured)

Step 4.9: Create src/pages/404.astro

Create a simple 404 page with:

  • Import BaseLayout
  • Heading, message, and links back to home and journal

Step 4.10: Create public/robots.txt

User-agent: *
Allow: /

Sitemap: https://tugtool.dev/sitemap-index.xml

Step 4.11: Create directory structure

mkdir -p src/{pages,layouts,styles,lib,content/journal} public/{fonts,og}

Copy public assets (favicon, logos, OG images) from existing design or create new ones.

Step 4.12: Update .gitignore

Ensure .gitignore includes entries for:

  • Astro build output (dist/, .astro/)
  • Node dependencies (node_modules/)
  • Font files if licensed (public/fonts/*.woff, public/fonts/*.woff2, public/fonts/*.ttf)

Note: Inter is OFL-licensed and could be committed, but typography.com fonts are proprietary and should not be committed to public repos.
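A matching .gitignore fragment, mirroring the bullets above:

```
dist/
.astro/
node_modules/
public/fonts/*.woff
public/fonts/*.woff2
public/fonts/*.ttf
```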

Verification

After implementation:

  1. Install Node (if needed)

    nvm install
    nvm use
  2. Install dependencies

    pnpm install
  3. Run dev server

    pnpm dev

    Visit http://localhost:4321 — should see "tugtool.dev" homepage

  4. Test build

    pnpm build

    Should create dist/ with static files

  5. Test preview

    pnpm preview

    Visit http://localhost:4321 — should match dev

  6. Verify 404 page Visit http://localhost:4321/nonexistent — should show 404 page

  7. Full deploy test

    ./deploy
    curl https://tugtool.dev

    Should deploy and serve the new homepage

  8. Verify sitemap generated

    ls dist/sitemap*.xml
    curl https://tugtool.dev/sitemap-index.xml
  9. Verify fonts loaded (if font files provided)

    • Open browser dev tools → Network tab
    • Reload page
    • Confirm .woff2 files loaded from /fonts/

Notes

  • Font files must be provided by user (Google Fonts download + typography.com web kit)
  • If fonts aren't available yet, the site will fall back to system fonts gracefully
  • Content collections (Phase 5) not set up yet — journal/ directory created but empty
  • RSS feed will work once content exists
  • Typography.com fonts are purchased/licensed. Do not commit font files to a public repository. If the repo is public, add public/fonts/*.woff* to .gitignore and document the required fonts in the README so collaborators know what to obtain.

Benefits:

  • No external requests at page load
  • No GDPR/privacy concerns (no Google Fonts CDN)
  • Fonts versioned with the site
  • Works offline
  • Faster first contentful paint

Phase 5 Implementation Plan — Blog System

Overview

Bring the full blog system online with content collections, MDX support, RSS, and all markdown niceties.

Dependencies to Install

pnpm add remark-smartypants rehype-slug rehype-autolink-headings remark-toc

Files to Create/Modify

5.1. Update astro.config.mjs

Add all remark/rehype plugins with proper ordering:

import remarkSmartypants from 'remark-smartypants';
import remarkToc from 'remark-toc';
import rehypeSlug from 'rehype-slug';
import rehypeAutolinkHeadings from 'rehype-autolink-headings';

// In markdown config:
markdown: {
  remarkPlugins: [
    remarkGfm,
    remarkSmartypants,
    [remarkToc, { heading: 'contents|table of contents|toc', maxDepth: 3, tight: true }],
  ],
  rehypePlugins: [
    rehypeSlug,
    [rehypeAutolinkHeadings, { behavior: 'prepend', properties: { className: ['anchor-link'], ariaHidden: true, tabIndex: -1 } }],
  ],
  shikiConfig: {
    themes: { light: 'github-light', dark: 'github-dark' },
  },
},

5.2. Create src/content/config.ts

Define the journal collection schema:

import { defineCollection, z } from 'astro:content';

const journal = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    date: z.coerce.date(),
    slug: z.string(),
    tags: z.array(z.string()).default([]),
    description: z.string().max(200).optional(),
  }),
});

export const collections = { journal };

Notes:

  • slug is required in frontmatter. The deploy script validates slug uniqueness.
  • date supports multiple formats via z.coerce.date():
    • ISO datetime: 2025-01-16T12:00:00
    • Unix date output: Fri Jan 16 10:54:10 PST 2026
    • Use datetime (not just date) to ensure unambiguous ordering when multiple posts are published on the same day.

5.3. Create src/pages/journal/index.astro

Journal index page showing all posts:

  • Import getCollection from astro:content
  • Query all journal posts, sort by date descending
  • Render as card list with title, description, date
  • Use BaseLayout

Reference: Old implementation at /Users/kocienda/Mounts/u/src/tugtool.dev/src/pages/journal/index.astro

5.4. Create src/pages/journal/[slug].astro

Individual post pages with dynamic routing:

  • Use getStaticPaths() to generate routes from frontmatter slug field
  • Query post by slug, render MDX content
  • Wrap in prose styling
  • Include back link to journal index

Reference: Old implementation at /Users/kocienda/Mounts/u/src/tugtool.dev/src/pages/journal/[slug].astro

5.5. Create src/pages/rss.xml.ts

RSS feed generation:

  • Use @astrojs/rss (already installed)
  • Query all journal posts
  • Map to RSS items with title, pubDate, description, link
  • Use SITE config for feed metadata

Reference: Old implementation at /Users/kocienda/Mounts/u/src/tugtool.dev/src/pages/rss.xml.ts

5.6. Update src/pages/index.astro

Uncomment the "Latest entries" section:

  • Uncomment content collection imports
  • Uncomment the latest query
  • Uncomment the "Latest entries" section HTML

5.7. Update src/layouts/BaseLayout.astro

Uncomment the latestForDrawer query (for future mobile drawer support).

5.8. Create src/components/BlogImage.astro (Required)

Image wrapper component for MDX posts:

  • Wraps astro:assets Image component
  • Enforces required alt attribute
  • Sets sensible sizes default
  • Handles content-hashed output

Per site-plan.md (hard rule): "No raw <img> tags in MDX posts. All post images must be imported and rendered via the shared wrapper component."

5.9. Add anchor link styling to global.css

Style the heading anchor links:

.anchor-link {
  @apply text-muted-foreground opacity-0 transition-opacity;
}
h2:hover .anchor-link,
h3:hover .anchor-link,
h4:hover .anchor-link {
  @apply opacity-100;
}

5.10. Create a test post

Create src/content/journal/hello-world/index.mdx:

---
title: "Hello World"
date: 2025-01-16T12:00:00
slug: hello-world
tags: []
description: "First post to test the blog system."
---

# Hello World

This is a test post.

## Contents

## Section One

Some content here with "smart quotes" and -- dashes.

## Section Two

More content.

File Summary

File                                       Action
package.json                               Add 4 dependencies
astro.config.mjs                           Add remark/rehype plugins, shiki config
src/content/config.ts                      Create (new)
src/pages/journal/index.astro              Create (new)
src/pages/journal/[slug].astro             Create (new)
src/pages/rss.xml.ts                       Create (new)
src/pages/index.astro                      Uncomment Phase 5 code
src/layouts/BaseLayout.astro               Uncomment Phase 5 code
src/styles/global.css                      Add anchor link styles
src/components/BlogImage.astro             Create (required)
src/content/journal/hello-world/index.mdx  Create test post

Verification

  1. Install dependencies

    pnpm install
  2. Run dev server

    pnpm dev
  3. Test markdown niceties

    • Smart quotes: straight "test" should render as curly “test”
    • Dashes: -- should render as —
    • TOC: ## Contents should be replaced with linked list
    • Heading anchors: hover over h2/h3 to see anchor links
    • Code blocks: should have syntax highlighting
  4. Test RSS

    curl http://localhost:4321/rss.xml
    • Should return an XML feed containing the test post

  5. Test build

    pnpm build
    • Should complete without errors
    • Check dist/journal/hello-world/index.html exists
  6. Test deploy

    ./deploy

Notes

  • No draft feature per site-plan.md — preview locally before committing
  • Slug is authoritative (from frontmatter, not filename) — deploy script validates
  • Tag pages not implemented in this phase (can be added later)
  • MobileJournalDrawer deferred (requires React component from old site)

Phase 6 — Cloudflare analytics (optional, last)

Goal: Add analytics without touching infrastructure.

6.1 What changes

  • Add one Cloudflare Web Analytics snippet to the site layout

6.2 What does not change

  • No server configuration
  • No deploy process changes
  • No runtime dependencies

Cloudflare observes traffic at the DNS/CDN layer.

Deliverable: Analytics with zero operational complexity.

Final State

  • One git repo
  • One Linode VM
  • One static directory
  • One deploy command
  • Infinite releases via commit hashes
  • Zero CI rebuilds
  • Zero deployment drama
  • Optional Cloudflare proxy/CDN later (no origin redesign)

This is as close as the modern web gets to:

"copy files to a server and be done"

while still delivering TLS, SEO, images, RSS, and analytics.
