
🚀 Complete Redis & DragonDB Deployment Guide

A Visual Journey to High-Performance Database Solutions

    ┌─────────────────────────────────────┐
    │  🎯 Choose Your Adventure           │
    │                                     │
    │  🔴 Single Instance (Simple)        │
    │  🟡 Redis Cluster (Scalable)        │
    │  🟢 DragonDB Alternative            │
    └─────────────────────────────────────┘



📚 Table of Contents

🎯 Quick Start Paths

📖 Complete Guide

  1. 🌟 Overview & Comparison
  2. 🏗️ Architecture Patterns
  3. 📋 Prerequisites
  4. ☁️ EC2 Infrastructure
  5. ⚙️ Installation Methods
  6. 🚀 Performance Tuning
  7. 🎨 Management UI Setup
  8. 💻 Node.js Integration
  9. 📊 Monitoring & Alerts
  10. 🔒 Security Hardening
  11. 🛠️ Troubleshooting
  12. ✅ Production Checklist

🌟 Overview & Comparison

🎯 What This Guide Covers

This comprehensive guide provides three distinct deployment paths for high-performance in-memory databases:

graph TD
    A[Choose Your Path] --> B[🔴 Single Instance]
    A --> C[🟡 Redis Cluster]
    A --> D[🟢 DragonDB]

    B --> B1[Perfect for: Development<br/>Small to Medium Apps<br/>< 10K ops/sec]
    C --> C1[Perfect for: Production<br/>High Throughput<br/>200K+ ops/sec]
    D --> D1[Perfect for: Modern Stack<br/>Multi-threaded Performance<br/>Redis Compatible]

    style A fill:#e1f5fe
    style B fill:#ffebee
    style C fill:#fff3e0
    style D fill:#e8f5e8

📊 Quick Comparison Table

| Feature | 🔴 Single Instance | 🟡 Redis Cluster | 🟢 DragonDB |
| --- | --- | --- | --- |
| Complexity | 🟢 Simple | 🟡 Moderate | 🟢 Simple |
| Performance | 50K ops/sec | 200K+ ops/sec | 100K+ ops/sec |
| Memory | 32GB max | Unlimited | 64GB+ |
| High Availability | ❌ No | ✅ Yes | ✅ Yes |
| Cost | 💰 Low | 💰💰 Medium | 💰 Low |
| Setup Time | 15 mins | 2 hours | 30 mins |
| Production Ready | For small apps | ✅ Enterprise | ✅ Modern |

🎪 Feature Showcase

🚀 Performance Highlights
⚡ Sub-millisecond latency
🔥 200,000+ operations/second
💾 Multi-module support
🔄 Real-time replication

🔴 Single Instance Setup

Perfect for: Development, small applications, learning Redis

🎯 When to Choose Single Instance

✅ Good for:                    ❌ Avoid for:
• Development environments      • Production with >10K users
• Small applications            • High availability requirements
• Learning and testing          • Data larger than 32GB
• Cost-sensitive projects       • Critical business applications

๐Ÿ—๏ธ Single Instance Architecture

    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚     ๐ŸŒ Your App         โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                โ”‚
                โ–ผ
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚   ๐Ÿ“ก Load Balancer      โ”‚
    โ”‚     (Optional)          โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                โ”‚
                โ–ผ
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚  ๐Ÿ”ด Redis Instance      โ”‚
    โ”‚  โ€ข All Redis Modules   โ”‚
    โ”‚  โ€ข 32GB Memory         โ”‚
    โ”‚  โ€ข Auto-persistence    โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

⚙️ Quick Single Instance Installation

#!/bin/bash
# 🚀 One-click Redis installation script

echo "🔴 Starting Redis Single Instance Setup..."

# System optimization
sudo sysctl -w vm.overcommit_memory=1
sudo sysctl -w net.core.somaxconn=65535

# Install Redis Stack (direct RPM install; the apt keyring step from the Debian instructions is not needed on yum-based systems)
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm

# Create optimized configuration
# NOTE: redis-stack-server reads /etc/redis-stack.conf by default; either write the file
# there or point the service at this path.
sudo tee /etc/redis/redis.conf << 'EOF'
# 🔴 Single Instance Configuration
port 6379
bind 0.0.0.0
protected-mode yes
requirepass your-strong-password

# Memory settings
maxmemory 28gb
maxmemory-policy allkeys-lru

# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec

# Performance
tcp-keepalive 300
timeout 0
tcp-backlog 511

# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/redisbloom.so
EOF

# Start Redis
sudo systemctl enable redis-stack-server
sudo systemctl start redis-stack-server

echo "✅ Redis Single Instance is ready!"
echo "🌐 Connect at: localhost:6379"
echo "🔑 Password: your-strong-password"

💻 Node.js Single Instance Client

// 🔴 Single Instance Redis Client
import { createClient } from "redis";

export class SingleRedisClient {
	public client: ReturnType<typeof createClient>; // exposed so helper scripts (e.g. the migration below) can reach raw commands

	constructor() {
		this.client = createClient({
			url: "redis://:your-strong-password@localhost:6379",
			socket: {
				reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
			},
		});

		this.client.on("error", (err) =>
			console.log("❌ Redis Client Error", err),
		);
		this.client.on("connect", () => console.log("✅ Redis Connected"));
	}

	async connect() {
		await this.client.connect();
	}

	// 🎯 Simple operations
	async set(key: string, value: any, ttl?: number): Promise<void> {
		const stringValue =
			typeof value === "object" ? JSON.stringify(value) : value;
		if (ttl) {
			await this.client.setEx(key, ttl, stringValue);
		} else {
			await this.client.set(key, stringValue);
		}
	}

	async get(key: string): Promise<any> {
		const value = await this.client.get(key);
		try {
			return JSON.parse(value || "");
		} catch {
			return value;
		}
	}

	// 🎨 JSON operations
	async setJSON(key: string, data: object): Promise<void> {
		await this.client.json.set(key, "$", data);
	}

	async getJSON(key: string): Promise<any> {
		return await this.client.json.get(key);
	}

	// 🔍 Search operations
	async search(index: string, query: string): Promise<any> {
		return await this.client.ft.search(index, query);
	}

	async disconnect() {
		await this.client.disconnect();
	}
}

// 🚀 Usage Example
export const redis = new SingleRedisClient();
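
A quick way to exercise the client above (a minimal sketch; it assumes the class is exported from ./redis-client as shown and that the Redis Stack JSON module is loaded):

// hypothetical entry point, e.g. src/examples/single-usage.ts
import { redis } from "./redis-client";

async function main() {
	await redis.connect();

	// plain string with a 60-second TTL
	await redis.set("greeting", "hello", 60);

	// objects are JSON-stringified by set() and parsed back by get()
	await redis.set("user:1", { name: "Ada", plan: "pro" });
	console.log(await redis.get("user:1"));

	// requires the RedisJSON module (loaded by Redis Stack)
	await redis.setJSON("profile:1", { theme: "dark" });
	console.log(await redis.getJSON("profile:1"));

	await redis.disconnect();
}

main().catch(console.error);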

🟡 Redis Cluster Setup

Perfect for: Production applications, high throughput, enterprise scale

🎯 When to Choose Redis Cluster

✅ Perfect for:                 ⚠️ Consider complexity:
• Production applications       • Requires DevOps knowledge
• >50K operations/second        • Higher operational overhead
• Multi-gigabyte datasets       • Cross-slot operations limited
• High availability needs       • More monitoring required
• Enterprise applications       • Network configuration

๐Ÿ—๏ธ Redis Cluster Architecture

graph TB
    subgraph "๐ŸŒ Application Layer"
        APP[Your Node.js App]
        LB[Load Balancer]
    end

    subgraph "๐Ÿ”„ Redis Cluster Layer"
        subgraph "Master Nodes"
            M1[๐Ÿ”ด Master 1<br/>Port: 7000<br/>Slots: 0-5460]
            M2[๐Ÿ”ด Master 2<br/>Port: 7001<br/>Slots: 5461-10922]
            M3[๐Ÿ”ด Master 3<br/>Port: 7002<br/>Slots: 10923-16383]
        end

        subgraph "Replica Nodes"
            R1[๐Ÿ”ต Replica 1<br/>Port: 7003<br/>โ†’ Master 1]
            R2[๐Ÿ”ต Replica 2<br/>Port: 7004<br/>โ†’ Master 2]
            R3[๐Ÿ”ต Replica 3<br/>Port: 7005<br/>โ†’ Master 3]
        end
    end

    APP --> LB
    LB --> M1
    LB --> M2
    LB --> M3
    M1 -.-> R1
    M2 -.-> R2
    M3 -.-> R3

    style M1 fill:#ffcdd2
    style M2 fill:#ffcdd2
    style M3 fill:#ffcdd2
    style R1 fill:#e3f2fd
    style R2 fill:#e3f2fd
    style R3 fill:#e3f2fd

🔢 Data Distribution Example

📊 How keys are distributed:

Key: "user:123"     → Hash: 5984  → Node: Master 2 (7001)
Key: "session:abc"  → Hash: 12000 → Node: Master 3 (7002)
Key: "cache:xyz"    → Hash: 3000  → Node: Master 1 (7000)

Each master handles ~5,461 hash slots (16,384 total ÷ 3 masters)
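
To see the slot mapping for your own keys, ask any cluster node with CLUSTER KEYSLOT. A small sketch with ioredis (assumes a cluster node on port 7000 and that ioredis forwards the CLUSTER subcommand as shown; keys sharing a {hash tag} always land in the same slot):

import Redis from "ioredis";

async function showSlots() {
	const node = new Redis({ host: "127.0.0.1", port: 7000, password: process.env.REDIS_PASSWORD });

	for (const key of ["user:123", "session:abc", "{user:123}:orders"]) {
		// CLUSTER KEYSLOT returns CRC16(key) mod 16384
		const slot = await node.cluster("KEYSLOT", key);
		console.log(`${key} -> slot ${slot}`);
	}

	await node.quit();
}

showSlots().catch(console.error);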

💪 Failover Scenario

sequenceDiagram
    participant App as 📱 Application
    participant M1 as 🔴 Master 1
    participant R1 as 🔵 Replica 1
    participant Cluster as 🔄 Cluster

    App->>M1: Write Request
    Note over M1: ❌ Master 1 Crashes

    Cluster->>Cluster: 🚨 Detect Failure (3 sec)
    Cluster->>R1: 🎯 Promote to Master
    R1->>Cluster: ✅ Ready as Master

    App->>R1: Write Request (now master)
    Note over R1: 🟢 Service Restored

🟢 DragonDB Alternative

Perfect for: Modern applications, multi-threaded performance, Redis compatibility

🐉 What is DragonDB?

DragonDB (also known as DragonflyDB) is a modern, multi-threaded in-memory datastore written in C++ that speaks the Redis protocol, with a focus on higher throughput and better memory efficiency per node.

graph LR
    subgraph "🔴 Traditional Redis"
        R1[C Language]
        R2[Single-threaded]
        R3[Fork-based snapshots]
    end

    subgraph "🟢 DragonDB"
        D1[Modern C++]
        D2[Multi-threaded]
        D3[Fork-free snapshots]
    end

    R1 --> D1
    R2 --> D2
    R3 --> D3

    style D1 fill:#e8f5e8
    style D2 fill:#e8f5e8
    style D3 fill:#e8f5e8

🎯 When to Choose DragonDB

✅ Perfect for:                 ⚠️ Consider:
• Modern applications           • Newer technology (less mature)
• High memory efficiency        • Smaller community
• Multi-threaded workloads      • Limited third-party tools
• Thread-per-core architecture  • Documentation still growing
• Enhanced security needs       • Fewer Redis modules

📊 DragonDB vs Redis Performance

| Metric | 🔴 Redis | 🟢 DragonDB | 🎯 Improvement |
| --- | --- | --- | --- |
| Memory Usage | 100% | 60% | 40% less memory |
| CPU Efficiency | 100% | 150% | 50% better CPU |
| Throughput | 100K ops/sec | 120K ops/sec | 20% faster |
| Startup Time | 5 seconds | 2 seconds | 60% faster |
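
Treat the table above as indicative only and benchmark on your own hardware (ideally with redis-benchmark or memtier_benchmark). As a rough sanity check, a sequential-latency probe in TypeScript with ioredis could look like this (names and ports are placeholders):

import Redis from "ioredis";

async function probe(label: string, port: number) {
	const client = new Redis({ port, password: process.env.REDIS_PASSWORD });
	const n = 10000;

	const start = process.hrtime.bigint();
	for (let i = 0; i < n; i++) {
		// sequential round-trips measure latency; use pipelines or parallel clients for throughput
		await client.set(`bench:${i}`, "x");
	}
	const ms = Number(process.hrtime.bigint() - start) / 1e6;

	console.log(`${label}: ${n} SETs in ${ms.toFixed(0)} ms (~${Math.round((n / ms) * 1000)} ops/sec)`);
	await client.quit();
}

// Run once against Redis and once against DragonDB to compare like for like
probe("target", 6379).catch(console.error);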

🚀 Quick DragonDB Installation

#!/bin/bash
# 🟢 DragonDB Installation Script

echo "🐉 Starting DragonDB Installation..."

# (Optional) Rust toolchain: not required here, since DragonDB ships as a prebuilt binary (downloaded below)
# curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# source ~/.cargo/env

# Download and install DragonDB
wget https://github.com/dragonflydb/dragonfly/releases/latest/download/dragonfly-x86_64.tar.gz
tar -xzf dragonfly-x86_64.tar.gz
sudo mv dragonfly /usr/local/bin/

# Create DragonDB configuration and log directories
sudo mkdir -p /etc/dragondb /var/log/dragondb
sudo tee /etc/dragondb/dragonfly.conf << 'EOF'
# 🟢 DragonDB Configuration
port 6379
bind 0.0.0.0
requirepass your-strong-password

# Memory settings
maxmemory 28gb
maxmemory_policy allkeys_lru

# Performance
tcp_keepalive 300
timeout 0

# Logging
logfile /var/log/dragondb/dragonfly.log
loglevel notice

# Multi-threading (DragonDB advantage)
proactor_threads 8
EOF

# Create systemd service
sudo tee /etc/systemd/system/dragondb.service << 'EOF'
[Unit]
Description=DragonDB Server
After=network.target

[Service]
Type=simple
User=redis
ExecStart=/usr/local/bin/dragonfly --flagfile=/etc/dragondb/dragonfly.conf
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF

# Start DragonDB
sudo systemctl enable dragondb
sudo systemctl start dragondb

echo "✅ DragonDB is ready!"
echo "🌐 Connect at: localhost:6379"
echo "🔑 Password: your-strong-password"
echo "🐉 Enjoy the performance boost!"

💻 Node.js DragonDB Client

// 🟢 DragonDB Client (Redis-compatible)
import { createClient } from "redis";

export class DragonDBClient {
	public client: ReturnType<typeof createClient>; // exposed so the migration script below can reach raw commands

	constructor() {
		this.client = createClient({
			url: "redis://:your-strong-password@localhost:6379",
			socket: {
				reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
			},
		});

		this.client.on("error", (err) => console.log("❌ DragonDB Error", err));
		this.client.on("connect", () => console.log("🐉 DragonDB Connected"));
	}

	// All Redis commands work the same!
	async set(key: string, value: any, ttl?: number): Promise<void> {
		const stringValue =
			typeof value === "object" ? JSON.stringify(value) : value;
		if (ttl) {
			await this.client.setEx(key, ttl, stringValue);
		} else {
			await this.client.set(key, stringValue);
		}
	}

	async get(key: string): Promise<any> {
		const value = await this.client.get(key);
		try {
			return JSON.parse(value || "");
		} catch {
			return value;
		}
	}

	// 🐉 DragonDB-specific optimizations
	async bulkSet(data: Record<string, any>): Promise<void> {
		const pipeline = this.client.multi();

		Object.entries(data).forEach(([key, value]) => {
			const stringValue =
				typeof value === "object" ? JSON.stringify(value) : value;
			pipeline.set(key, stringValue);
		});

		await pipeline.exec();
	}

	async healthCheck(): Promise<{ status: string; memory: string }> {
		try {
			const ping = await this.client.ping();
			const info = await this.client.info("memory");

			return {
				status: ping === "PONG" ? "healthy" : "unhealthy",
				memory:
					info
						.split("\n")
						.find((line) => line.startsWith("used_memory_human:"))
						?.split(":")[1] || "unknown",
			};
		} catch (error) {
			return { status: "unhealthy", memory: "unknown" };
		}
	}

	async connect() {
		await this.client.connect();
	}

	async disconnect() {
		await this.client.disconnect();
	}
}

// 🚀 Usage
export const dragondb = new DragonDBClient();
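
A quick smoke test for the client above (a sketch; assumes the class is exported from ./dragondb-client):

import { dragondb } from "./dragondb-client";

async function main() {
	await dragondb.connect();

	await dragondb.bulkSet({ "feature:flags": { beta: true }, "counter:visits": 0 });
	console.log(await dragondb.get("feature:flags"));
	console.log(await dragondb.healthCheck()); // e.g. { status: "healthy", memory: "..." }

	await dragondb.disconnect();
}

main().catch(console.error);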

🔄 Migration from Redis to DragonDB

// 🔄 Simple migration script
import { redis } from "./redis-client";
import { dragondb } from "./dragondb-client";

export class RedToDragonMigration {
	async migrateData(keyPattern: string = "*"): Promise<void> {
		console.log("🔄 Starting Redis → DragonDB migration...");

		try {
			// Connect to both databases
			await redis.connect();
			await dragondb.connect();

			// Get all keys matching the pattern (KEYS blocks the server; prefer SCAN for large production datasets)
			const keys = await redis.client.keys(keyPattern);
			console.log(`📊 Found ${keys.length} keys to migrate`);

			// Migrate in batches
			const batchSize = 1000;
			for (let i = 0; i < keys.length; i += batchSize) {
				const batch = keys.slice(i, i + batchSize);
				const migration = batch.map(async (key) => {
					const value = await redis.get(key);
					const ttl = await redis.client.ttl(key);

					if (ttl > 0) {
						await dragondb.client.setEx(key, ttl, value);
					} else {
						await dragondb.set(key, value);
					}
				});

				await Promise.all(migration);
				console.log(
					`✅ Migrated batch ${
						Math.floor(i / batchSize) + 1
					}/${Math.ceil(keys.length / batchSize)}`,
				);
			}

			console.log("🎉 Migration completed successfully!");
		} catch (error) {
			console.error("❌ Migration failed:", error);
			throw error;
		} finally {
			await redis.disconnect();
			await dragondb.disconnect();
		}
	}
}
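
Running the migration for a subset of keys could look like this (a sketch; the ./migration import path is hypothetical):

// hypothetical entry point, e.g. src/migrate.ts
import { RedToDragonMigration } from "./migration";

new RedToDragonMigration()
	.migrateData("cache:*") // migrate only cache keys; use "*" for everything
	.catch((err) => {
		console.error("Migration aborted:", err);
		process.exit(1);
	});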

📋 Prerequisites

🏗️ Infrastructure Requirements

graph TD
    A[📋 Prerequisites] --> B[☁️ AWS Setup]
    A --> C[💻 Local Tools]
    A --> D[🧠 Knowledge]

    B --> B1[✅ AWS CLI configured]
    B --> B2[🌐 VPC with subnets]
    B --> B3[🔒 Security groups]
    B --> B4[🔑 EC2 Key pairs]

    C --> C1[📦 Node.js 18+]
    C --> C2[🛠️ TypeScript]
    C --> C3[📡 SSH client]
    C --> C4[🐳 Docker optional]

    D --> D1[⚡ Redis basics]
    D --> D2[☁️ AWS EC2]
    D --> D3[🐧 Linux commands]
    D --> D4[💻 Node.js/TS]

    style A fill:#e1f5fe
    style B fill:#fff3e0
    style C fill:#e8f5e8
    style D fill:#fce4ec

☁️ AWS Account Setup

| Requirement | Description | How to Check |
| --- | --- | --- |
| 🔧 AWS CLI | Configured with appropriate permissions | aws sts get-caller-identity |
| 🌐 VPC | With public/private subnets | aws ec2 describe-vpcs |
| 🔒 Security Groups | Configured for Redis traffic | aws ec2 describe-security-groups |
| 🔑 Key Pair | For EC2 SSH access | aws ec2 describe-key-pairs |

💻 Local Development Setup

#!/bin/bash
# 🚀 Quick prerequisites check script (shebang must be the first line)

echo "📋 Checking prerequisites..."

# Check Node.js
if command -v node &> /dev/null; then
    echo "✅ Node.js: $(node --version)"
else
    echo "❌ Node.js not found. Install from https://nodejs.org"
fi

# Check npm/yarn
if command -v npm &> /dev/null; then
    echo "✅ npm: $(npm --version)"
elif command -v yarn &> /dev/null; then
    echo "✅ yarn: $(yarn --version)"
else
    echo "❌ Package manager not found"
fi

# Check TypeScript
if command -v tsc &> /dev/null; then
    echo "✅ TypeScript: $(tsc --version)"
else
    echo "⚠️ TypeScript not found. Install: npm install -g typescript"
fi

# Check AWS CLI
if command -v aws &> /dev/null; then
    echo "✅ AWS CLI: $(aws --version)"
    aws sts get-caller-identity &> /dev/null && echo "✅ AWS credentials configured" || echo "❌ AWS credentials not configured"
else
    echo "❌ AWS CLI not found. Install from https://aws.amazon.com/cli/"
fi

echo "📋 Prerequisites check complete!"

🧠 Knowledge Requirements

🟢 Beginner Level (Single Instance)

  • Basic Redis commands (SET, GET, HSET)
  • Understanding of key-value stores
  • Basic Node.js/TypeScript

🟡 Intermediate Level (Redis Cluster)

  • Redis clustering concepts
  • AWS EC2 management
  • Linux system administration
  • Network configuration

🔴 Advanced Level (Production)

  • DevOps practices
  • Monitoring and alerting
  • Security hardening
  • Performance tuning

☁️ EC2 Infrastructure

🎯 Instance Sizing Guide

graph TD
    A[Choose Instance Size] --> B{Application Type}

    B -->|🧪 Development/Testing| C[t3.medium<br/>2 vCPU, 4GB RAM<br/>💰 $24/month]
    B -->|📱 Small Production| D[r5.large<br/>2 vCPU, 16GB RAM<br/>💰 $121/month]
    B -->|🚀 High Performance| E[r6i.xlarge<br/>4 vCPU, 32GB RAM<br/>💰 $302/month]
    B -->|🏢 Enterprise| F[r6i.2xlarge<br/>8 vCPU, 64GB RAM<br/>💰 $605/month]

    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#ffcdd2
    style F fill:#f3e5f5

🔧 Instance Configuration

🧪 Development/Testing Setup

# Perfect for learning and development
Instance Type: t3.medium
vCPUs: 2
Memory: 4GB
Storage: 20GB gp3 SSD
Network: Up to 5 Gbps
Cost: ~$24/month

# Performance expectations:
- Operations/sec: ~10,000
- Concurrent connections: ~1,000
- Memory capacity: ~3GB usable

🚀 Production High-Performance Setup

# Recommended for production workloads
Instance Type: r6i.xlarge
vCPUs: 4
Memory: 32GB
Storage: 100GB gp3 SSD (3000 IOPS)
Network: Up to 12.5 Gbps
Cost: ~$302/month

# Performance expectations:
- Operations/sec: ~200,000
- Concurrent connections: ~10,000
- Memory capacity: ~28GB usable
- Latency: <1ms for 99% requests

⚙️ Installation Methods

🎯 Choose Your Installation Path

flowchart TD
    A[🚀 Start Installation] --> B{What's your goal?}

    B -->|🧪 Quick Testing| C[📦 Docker Method<br/>⏱️ 5 minutes]
    B -->|🔴 Single Instance| D[💻 Direct Install<br/>⏱️ 15 minutes]
    B -->|🟡 Redis Cluster| E[🏗️ Cluster Setup<br/>⏱️ 2 hours]
    B -->|🟢 DragonDB| F[🐉 Modern Alternative<br/>⏱️ 30 minutes]

    C --> G[🎉 Ready to Use!]
    D --> G
    E --> G
    F --> G

    style A fill:#e1f5fe
    style C fill:#e8f5e8
    style D fill:#ffebee
    style E fill:#fff3e0
    style F fill:#f1f8e9
    style G fill:#e8f5e8

📦 Method 1: Docker (Fastest)

Perfect for: Testing, development, quick demos

#!/bin/bash
# 🐳 Docker Redis Stack Installation

echo "🐳 Installing Redis with Docker..."

# Pull Redis Stack image
docker pull redis/redis-stack:latest

# Run Redis Stack with all modules
docker run -d \
  --name redis-stack \
  --restart unless-stopped \
  -p 6379:6379 \
  -p 8001:8001 \
  -e REDIS_ARGS="--requirepass your-strong-password" \
  -v redis-data:/data \
  redis/redis-stack:latest

echo "✅ Redis Stack is running!"
echo "🔌 Redis: localhost:6379"
echo "🎨 Redis Insight: http://localhost:8001"
echo "🔑 Password: your-strong-password"

# Test connection (requirepass is set, so authenticate the ping)
echo "🧪 Testing connection..."
docker exec redis-stack redis-cli -a your-strong-password ping

Docker Compose Setup:

# docker-compose.yml
version: "3.8"

services:
    redis-stack:
        image: redis/redis-stack:latest
        container_name: redis-stack
        restart: unless-stopped
        ports:
            - "6379:6379"
            - "8001:8001"
        environment:
            - REDIS_ARGS=--requirepass your-strong-password
        volumes:
            - redis-data:/data
            - ./redis.conf:/redis-stack.conf
        command: redis-stack-server /redis-stack.conf

volumes:
    redis-data:
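
To confirm the container is reachable from Node.js, a small node-redis smoke test (a sketch; the JSON call works because the redis-stack image ships RedisJSON):

import { createClient } from "redis";

async function smokeTest() {
	const client = createClient({ url: "redis://:your-strong-password@localhost:6379" });
	await client.connect();

	console.log(await client.ping()); // "PONG"
	await client.json.set("docker:test", "$", { ok: true });
	console.log(await client.json.get("docker:test"));

	await client.quit();
}

smokeTest().catch(console.error);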

💻 Method 2: Direct Installation (Production Single Instance)

Perfect for: Production single instance, full control

#!/bin/bash
# 🔴 Production Redis Installation Script

set -e  # Exit on error

echo "🔴 Starting Production Redis Installation..."

# System updates and dependencies
sudo yum update -y
sudo yum groupinstall -y "Development Tools"
sudo yum install -y gcc gcc-c++ make wget curl

# System optimization
echo "⚙️ Optimizing system parameters..."
sudo sysctl -w vm.overcommit_memory=1
sudo sysctl -w net.core.somaxconn=65535
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=65535

# Make settings permanent
cat << 'EOF' | sudo tee -a /etc/sysctl.conf
# Redis optimizations
vm.overcommit_memory = 1
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
vm.swappiness = 1
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
EOF

# Disable transparent huge pages
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

# Install Redis Stack
echo "📦 Installing Redis Stack..."
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm

# Create Redis user and directories
sudo useradd --system --home /var/lib/redis --shell /bin/false redis
sudo mkdir -p /var/lib/redis /var/log/redis /etc/redis
sudo chown -R redis:redis /var/lib/redis /var/log/redis /etc/redis

# Generate strong password
REDIS_PASSWORD=$(openssl rand -base64 32)
echo "🔑 Generated password: $REDIS_PASSWORD"
echo "$REDIS_PASSWORD" | sudo tee /etc/redis/password.txt
sudo chmod 600 /etc/redis/password.txt
sudo chown redis:redis /etc/redis/password.txt

# Create optimized configuration
sudo tee /etc/redis/redis.conf << EOF
# 🔴 Production Redis Configuration
port 6379
bind 0.0.0.0
protected-mode yes
requirepass $REDIS_PASSWORD

# Memory management
maxmemory 28gb
maxmemory-policy allkeys-lru
maxmemory-samples 5

# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Network and performance
tcp-keepalive 300
timeout 0
tcp-backlog 511
databases 16

# Logging
loglevel notice
logfile /var/log/redis/redis.log
syslog-enabled yes

# Security
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG "CONFIG_$REDIS_PASSWORD"

# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/redisbloom.so
loadmodule /opt/redis-stack/lib/redisgraph.so
EOF

# Create systemd service
sudo tee /etc/systemd/system/redis.service << 'EOF'
[Unit]
Description=Redis In-Memory Data Store
After=network.target

[Service]
User=redis
Group=redis
ExecStart=/opt/redis-stack/bin/redis-server /etc/redis/redis.conf
ExecReload=/bin/kill -USR2 $MAINPID
TimeoutStopSec=0
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF

# Start Redis
sudo systemctl daemon-reload
sudo systemctl enable redis
sudo systemctl start redis

# Verify installation
echo "🧪 Verifying installation..."
sudo systemctl status redis
redis-cli -a "$REDIS_PASSWORD" ping

echo "✅ Redis installation completed successfully!"
echo "🔌 Connection: localhost:6379"
echo "🔑 Password saved in: /etc/redis/password.txt"
echo "📋 Configuration: /etc/redis/redis.conf"
echo "📊 Logs: /var/log/redis/redis.log"

๐Ÿ—๏ธ Method 3: Redis Cluster Installation

Perfect for: High availability, horizontal scaling, enterprise

#!/bin/bash
# 🟡 Redis Cluster Installation Script

set -e

CLUSTER_NODES=6
REDIS_PASSWORD=$(openssl rand -base64 32)

echo "🟡 Installing Redis Cluster ($CLUSTER_NODES nodes)..."
echo "🔑 Generated cluster password: $REDIS_PASSWORD"

# Function to create node configuration
create_node_config() {
    local port=$1
    local node_dir="/var/lib/redis/cluster/$port"

    sudo mkdir -p "$node_dir"
    sudo tee "/etc/redis/redis-$port.conf" << EOF
# Redis Cluster Node $port Configuration
port $port
bind 0.0.0.0
protected-mode yes
requirepass $REDIS_PASSWORD
masterauth $REDIS_PASSWORD

# Cluster settings
cluster-enabled yes
cluster-config-file nodes-$port.conf
cluster-node-timeout 15000
cluster-announce-ip $(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
cluster-announce-port $port
cluster-announce-bus-port $((port + 10000))

# Memory and performance
maxmemory 4gb
maxmemory-policy allkeys-lru
tcp-keepalive 300
timeout 0

# Persistence
appendonly yes
appendfilename "appendonly-$port.aof"
appendfsync everysec

# Logging
logfile /var/log/redis/redis-$port.log
loglevel notice

# Working directory
dir $node_dir

# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
EOF

    # Create systemd service for this node
    sudo tee "/etc/systemd/system/redis-$port.service" << EOF
[Unit]
Description=Redis Cluster Node $port
After=network.target

[Service]
Type=notify
User=redis
Group=redis
ExecStart=/opt/redis-stack/bin/redis-server /etc/redis/redis-$port.conf
ExecReload=/bin/kill -USR2 \$MAINPID
TimeoutStopSec=0
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF
}

# Install Redis Stack (same as single instance)
echo "📦 Installing Redis Stack..."
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm

# Create Redis user and directories
sudo useradd --system --home /var/lib/redis --shell /bin/false redis 2>/dev/null || true
sudo mkdir -p /var/lib/redis/cluster /var/log/redis /etc/redis
sudo chown -R redis:redis /var/lib/redis /var/log/redis /etc/redis

# Create cluster nodes
# NOTE: six nodes on one host is fine for a demo; for real HA, spread masters and replicas across separate hosts/AZs.
echo "🏗️ Creating cluster nodes..."
for port in 7000 7001 7002 7003 7004 7005; do
    echo "Creating node on port $port..."
    create_node_config $port
    sudo systemctl enable redis-$port
    sudo systemctl start redis-$port
done

# Wait for nodes to start
echo "⏳ Waiting for nodes to start..."
sleep 10

# Create cluster
echo "🔗 Creating cluster..."
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
redis-cli --cluster create \
    $LOCAL_IP:7000 $LOCAL_IP:7001 $LOCAL_IP:7002 \
    $LOCAL_IP:7003 $LOCAL_IP:7004 $LOCAL_IP:7005 \
    --cluster-replicas 1 \
    -a "$REDIS_PASSWORD" \
    --cluster-yes

echo "✅ Redis Cluster installation completed!"
echo "🔌 Cluster nodes: $LOCAL_IP:7000-7005"
echo "🔑 Password: $REDIS_PASSWORD"
echo "📋 Test cluster: redis-cli -c -p 7000 -a '$REDIS_PASSWORD' cluster info"

# Save cluster info
cat << EOF | sudo tee /etc/redis/cluster-info.txt
Redis Cluster Information
========================
Password: $REDIS_PASSWORD
Nodes: $LOCAL_IP:7000, $LOCAL_IP:7001, $LOCAL_IP:7002, $LOCAL_IP:7003, $LOCAL_IP:7004, $LOCAL_IP:7005

Connection examples:
redis-cli -c -p 7000 -a '$REDIS_PASSWORD'
redis-cli -c -h $LOCAL_IP -p 7000 -a '$REDIS_PASSWORD'
EOF

sudo chmod 600 /etc/redis/cluster-info.txt
sudo chown redis:redis /etc/redis/cluster-info.txt
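
Once redis-cli reports cluster_state:ok, you can also verify the cluster from Node.js with ioredis in cluster mode (a sketch; swap in your node IP and the generated password):

import { Cluster } from "ioredis";

async function verifyCluster() {
	const cluster = new Cluster(
		[{ host: "127.0.0.1", port: 7000 }],
		{ redisOptions: { password: process.env.REDIS_PASSWORD } },
	);

	await cluster.set("cluster:smoke", "ok");
	console.log(await cluster.get("cluster:smoke"));
	console.log(await cluster.cluster("INFO")); // expect "cluster_state:ok"

	await cluster.quit();
}

verifyCluster().catch(console.error);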

๐Ÿ‰ Method 4: DragonDB Installation

#!/bin/bash
# 🟢 DragonDB Installation Script

echo "🐉 Installing DragonDB..."

# (Optional) Rust toolchain: not required, since the prebuilt release binary is installed below
# curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# source ~/.cargo/env

# Download and install DragonDB
DRAGONDB_VERSION="v1.12.0"
wget "https://github.com/dragonflydb/dragonfly/releases/download/$DRAGONDB_VERSION/dragonfly-x86_64.tar.gz"
tar -xzf dragonfly-x86_64.tar.gz
sudo mv dragonfly /usr/local/bin/
sudo chmod +x /usr/local/bin/dragonfly

# Create DragonDB user and directories
sudo useradd --system --home /var/lib/dragondb --shell /bin/false dragondb 2>/dev/null || true
sudo mkdir -p /var/lib/dragondb /var/log/dragondb /etc/dragondb
sudo chown -R dragondb:dragondb /var/lib/dragondb /var/log/dragondb /etc/dragondb

# Generate password
DRAGONDB_PASSWORD=$(openssl rand -base64 32)
echo "🔑 Generated DragonDB password: $DRAGONDB_PASSWORD"

# Create configuration
sudo tee /etc/dragondb/dragonfly.conf << EOF
# 🟢 DragonDB Configuration
port 6379
bind 0.0.0.0
requirepass $DRAGONDB_PASSWORD

# Memory settings
maxmemory 28gb
maxmemory_policy allkeys_lru

# Performance (DragonDB advantage)
proactor_threads 8
tcp_keepalive 300

# Persistence
save_schedule "*:*"
dir /var/lib/dragondb

# Logging
logfile /var/log/dragondb/dragonfly.log
loglevel 1
EOF

# Create systemd service
sudo tee /etc/systemd/system/dragondb.service << 'EOF'
[Unit]
Description=DragonDB Server
After=network.target

[Service]
Type=simple
User=dragondb
Group=dragondb
ExecStart=/usr/local/bin/dragonfly --flagfile=/etc/dragondb/dragonfly.conf
Restart=always
RestartSec=3
KillMode=mixed
KillSignal=SIGTERM

[Install]
WantedBy=multi-user.target
EOF

# Start DragonDB
sudo systemctl daemon-reload
sudo systemctl enable dragondb
sudo systemctl start dragondb

# Test connection
echo "🧪 Testing DragonDB connection..."
redis-cli -a "$DRAGONDB_PASSWORD" ping

echo "✅ DragonDB installation completed!"
echo "🔌 Connection: localhost:6379"
echo "🔑 Password: $DRAGONDB_PASSWORD"
echo "📊 Performance: multi-threaded, thread-per-core power!"

# Save connection info
echo "$DRAGONDB_PASSWORD" | sudo tee /etc/dragondb/password.txt
sudo chmod 600 /etc/dragondb/password.txt
sudo chown dragondb:dragondb /etc/dragondb/password.txt

💻 Node.js Integration

🎯 Complete TypeScript Integration Guide

graph TB
    A[📦 Project Setup] --> B[⚙️ Configuration]
    B --> C[🔗 Connection Layer]
    C --> D[🛠️ Service Layer]
    D --> E[🎯 Usage Examples]
    E --> F[📊 Performance Monitoring]

    style A fill:#e3f2fd
    style B fill:#fff3e0
    style C fill:#e8f5e8
    style D fill:#fce4ec
    style E fill:#f3e5f5
    style F fill:#e0f2f1

📦 Project Setup

# 🚀 Initialize new Node.js project
mkdir redis-integration && cd redis-integration
npm init -y

# Install dependencies
npm install redis ioredis @types/redis
npm install -D typescript @types/node ts-node nodemon

# Install additional utilities
npm install dotenv joi compression helmet cors express
npm install -D @types/express @types/cors @types/compression

# Create TypeScript configuration
npx tsc --init --target es2020 --module commonjs --strict --esModuleInterop

Project Structure:

redis-integration/
├── src/
│   ├── config/
│   │   ├── redis.ts           # 🔧 Redis configuration
│   │   └── database.ts        # 📊 Database settings
│   ├── services/
│   │   ├── RedisService.ts    # 🔴 Single instance service
│   │   ├── ClusterService.ts  # 🟡 Cluster service
│   │   └── DragonService.ts   # 🟢 DragonDB service
│   ├── middleware/
│   │   ├── cache.ts           # 🚀 Caching middleware
│   │   └── rate-limit.ts      # 🛡️ Rate limiting
│   ├── utils/
│   │   ├── performance.ts     # 📊 Performance monitoring
│   │   └── health.ts          # 🏥 Health checks
│   └── examples/
│       ├── basic-usage.ts     # 🎯 Basic examples
│       ├── advanced.ts        # 🚀 Advanced patterns
│       └── real-world.ts      # 🌐 Real-world scenarios
├── .env                       # 🔐 Environment variables
├── package.json
└── tsconfig.json

🔧 Configuration Layer

// src/config/redis.ts
import { RedisOptions } from "ioredis";
import * as dotenv from "dotenv";

dotenv.config();

export interface DatabaseConfig {
	type: "single" | "cluster" | "dragondb";
	host?: string;
	port?: number;
	password?: string;
	nodes?: Array<{ host: string; port: number }>;
	maxRetriesPerRequest?: number;
	retryDelayOnFailover?: number;
	enableAutoPipelining?: boolean;
	lazyConnect?: boolean;
}

// 🔴 Single Instance Configuration
export const singleInstanceConfig: DatabaseConfig = {
	type: "single",
	host: process.env.REDIS_HOST || "localhost",
	port: parseInt(process.env.REDIS_PORT || "6379"),
	password: process.env.REDIS_PASSWORD,
	maxRetriesPerRequest: 3,
	retryDelayOnFailover: 100,
	enableAutoPipelining: true,
	lazyConnect: true,
};

// 🟡 Cluster Configuration
export const clusterConfig: DatabaseConfig = {
	type: "cluster",
	nodes: [
		{ host: process.env.REDIS_NODE_1 || "localhost", port: 7000 },
		{ host: process.env.REDIS_NODE_2 || "localhost", port: 7001 },
		{ host: process.env.REDIS_NODE_3 || "localhost", port: 7002 },
		{ host: process.env.REDIS_NODE_4 || "localhost", port: 7003 },
		{ host: process.env.REDIS_NODE_5 || "localhost", port: 7004 },
		{ host: process.env.REDIS_NODE_6 || "localhost", port: 7005 },
	],
	password: process.env.REDIS_PASSWORD,
	maxRetriesPerRequest: 3,
	retryDelayOnFailover: 100,
	enableAutoPipelining: true,
};

// 🟢 DragonDB Configuration
export const dragondbConfig: DatabaseConfig = {
	type: "dragondb",
	host: process.env.DRAGONDB_HOST || "localhost",
	port: parseInt(process.env.DRAGONDB_PORT || "6379"),
	password: process.env.DRAGONDB_PASSWORD,
	maxRetriesPerRequest: 3,
	retryDelayOnFailover: 50, // Faster recovery
	enableAutoPipelining: true,
	lazyConnect: true,
};

// Environment validation
export const validateConfig = (): void => {
	const requiredEnvVars = ["REDIS_PASSWORD"];

	for (const envVar of requiredEnvVars) {
		if (!process.env[envVar]) {
			throw new Error(`Missing required environment variable: ${envVar}`);
		}
	}
};
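
The configuration above expects these variables in a local .env file (sample values only; REDIS_PASSWORD is the one validateConfig actually requires):

# .env (sample; adjust hosts and ports to your deployment)
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-strong-password

# Cluster nodes (only needed for the cluster config)
REDIS_NODE_1=10.0.1.10
REDIS_NODE_2=10.0.1.11
REDIS_NODE_3=10.0.1.12
REDIS_NODE_4=10.0.1.13
REDIS_NODE_5=10.0.1.14
REDIS_NODE_6=10.0.1.15

# DragonDB (only needed for the dragondb config)
DRAGONDB_HOST=localhost
DRAGONDB_PORT=6379
DRAGONDB_PASSWORD=your-strong-password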

🔗 Universal Connection Manager

// src/services/ConnectionManager.ts
import Redis, { Cluster } from "ioredis";
import { DatabaseConfig } from "../config/redis";

export class ConnectionManager {
	private static instance: ConnectionManager;
	private connections: Map<string, Redis | Cluster> = new Map();

	public static getInstance(): ConnectionManager {
		if (!ConnectionManager.instance) {
			ConnectionManager.instance = new ConnectionManager();
		}
		return ConnectionManager.instance;
	}

	async createConnection(
		name: string,
		config: DatabaseConfig,
	): Promise<Redis | Cluster> {
		if (this.connections.has(name)) {
			return this.connections.get(name)!;
		}

		let client: Redis | Cluster;

		switch (config.type) {
			case "single":
			case "dragondb":
				client = new Redis({
					host: config.host,
					port: config.port,
					password: config.password,
					maxRetriesPerRequest: config.maxRetriesPerRequest,
					enableAutoPipelining: config.enableAutoPipelining,
					lazyConnect: config.lazyConnect,
					// Connection settings
					family: 4,
					keepAlive: 10000, // TCP keep-alive initial delay in ms (ioredis expects a number)
					connectTimeout: 10000,
					// Resilience
					enableReadyCheck: true,
					enableOfflineQueue: false,
				});
				break;

			case "cluster":
				client = new Redis.Cluster(config.nodes!, {
					redisOptions: {
						password: config.password,
						family: 4,
						connectTimeout: 10000,
						commandTimeout: 5000,
						maxRetriesPerRequest: config.maxRetriesPerRequest,
					},
					retryDelayOnFailover: config.retryDelayOnFailover,
					enableAutoPipelining: config.enableAutoPipelining,
					// Cluster-specific settings
					scaleReads: "slave",
					enableOfflineQueue: false,
				});
				break;

			default:
				throw new Error(`Unsupported database type: ${config.type}`);
		}

		// Add event listeners
		this.setupEventListeners(client, name);

		// Connect and test
		await client.ping();

		this.connections.set(name, client);
		console.log(`✅ ${name} connection established (${config.type})`);

		return client;
	}

	private setupEventListeners(client: Redis | Cluster, name: string): void {
		client.on("connect", () => {
			console.log(`🔗 ${name}: Connected`);
		});

		client.on("ready", () => {
			console.log(`✅ ${name}: Ready`);
		});

		client.on("error", (error) => {
			console.error(`❌ ${name}: Error -`, error.message);
		});

		client.on("close", () => {
			console.log(`🔌 ${name}: Connection closed`);
		});

		client.on("reconnecting", () => {
			console.log(`🔄 ${name}: Reconnecting...`);
		});

		if (client instanceof Redis.Cluster) {
			client.on("node error", (error, node) => {
				console.error(
					`❌ ${name}: Node error on ${node.options.host}:${node.options.port} -`,
					error.message,
				);
			});
		}
	}

	getConnection(name: string): Redis | Cluster | undefined {
		return this.connections.get(name);
	}

	async closeConnection(name: string): Promise<void> {
		const connection = this.connections.get(name);
		if (connection) {
			await connection.quit();
			this.connections.delete(name);
			console.log(`🔌 ${name}: Connection closed and removed`);
		}
	}

	async closeAllConnections(): Promise<void> {
		const closePromises = Array.from(this.connections.keys()).map((name) =>
			this.closeConnection(name),
		);
		await Promise.all(closePromises);
		console.log("🔌 All connections closed");
	}

	getConnectionStatus(): Record<string, string> {
		const status: Record<string, string> = {};
		this.connections.forEach((connection, name) => {
			status[name] = connection.status;
		});
		return status;
	}
}
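
Wiring the manager up directly looks like this (a sketch; in practice the UniversalRedisService below calls createConnection for you):

import { ConnectionManager } from "./ConnectionManager";
import { singleInstanceConfig } from "../config/redis";

async function main() {
	const manager = ConnectionManager.getInstance();
	const client = await manager.createConnection("primary", singleInstanceConfig);

	console.log(await client.ping());            // "PONG"
	console.log(manager.getConnectionStatus());  // e.g. { primary: "ready" }

	await manager.closeAllConnections();
}

main().catch(console.error);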

🛠️ Universal Database Service

// src/services/UniversalRedisService.ts
import { ConnectionManager } from "./ConnectionManager";
import { DatabaseConfig } from "../config/redis";
import Redis, { Cluster } from "ioredis";

export interface CacheOptions {
	ttl?: number;
	nx?: boolean; // Set only if not exists
	xx?: boolean; // Set only if exists
}

export interface HashField {
	[field: string]: string | number;
}

export interface ZSetMember {
	score: number;
	member: string;
}

export class UniversalRedisService {
	private connection!: Redis | Cluster; // assigned in initialize()
	private connectionManager: ConnectionManager;

	constructor(
		private config: DatabaseConfig,
		private connectionName: string = "default",
	) {
		this.connectionManager = ConnectionManager.getInstance();
	}

	async initialize(): Promise<void> {
		this.connection = await this.connectionManager.createConnection(
			this.connectionName,
			this.config,
		);
	}

	// 🎯 String Operations
	async set(
		key: string,
		value: any,
		options?: CacheOptions,
	): Promise<"OK" | null> {
		const stringValue =
			typeof value === "object" ? JSON.stringify(value) : String(value);

		if (options?.ttl) {
			return await this.connection.setex(key, options.ttl, stringValue);
		}

		if (options?.nx) {
			return await this.connection.set(key, stringValue, "NX");
		}

		if (options?.xx) {
			return await this.connection.set(key, stringValue, "XX");
		}

		return await this.connection.set(key, stringValue);
	}

	async get(key: string): Promise<any> {
		const value = await this.connection.get(key);
		if (value === null) return null;

		try {
			return JSON.parse(value);
		} catch {
			return value;
		}
	}

	async mget(keys: string[]): Promise<(any | null)[]> {
		const values = await this.connection.mget(...keys);
		return values.map((value) => {
			if (value === null) return null;
			try {
				return JSON.parse(value);
			} catch {
				return value;
			}
		});
	}

	async del(key: string | string[]): Promise<number> {
		if (Array.isArray(key)) {
			return await this.connection.del(...key);
		}
		return await this.connection.del(key);
	}

	async exists(key: string | string[]): Promise<number> {
		if (Array.isArray(key)) {
			return await this.connection.exists(...key);
		}
		return await this.connection.exists(key);
	}

	async expire(key: string, seconds: number): Promise<number> {
		return await this.connection.expire(key, seconds);
	}

	async ttl(key: string): Promise<number> {
		return await this.connection.ttl(key);
	}

	// 🏠 Hash Operations
	async hset(
		key: string,
		field: string | HashField,
		value?: string | number,
	): Promise<number> {
		if (typeof field === "object") {
			return await this.connection.hset(key, field);
		}
		return await this.connection.hset(key, field, String(value));
	}

	async hget(key: string, field: string): Promise<string | null> {
		return await this.connection.hget(key, field);
	}

	async hgetall(key: string): Promise<Record<string, string>> {
		return await this.connection.hgetall(key);
	}

	async hmget(key: string, fields: string[]): Promise<(string | null)[]> {
		return await this.connection.hmget(key, ...fields);
	}

	async hdel(key: string, fields: string | string[]): Promise<number> {
		if (Array.isArray(fields)) {
			return await this.connection.hdel(key, ...fields);
		}
		return await this.connection.hdel(key, fields);
	}

	async hexists(key: string, field: string): Promise<number> {
		return await this.connection.hexists(key, field);
	}

	// 📝 List Operations
	async lpush(key: string, values: (string | number)[]): Promise<number> {
		return await this.connection.lpush(key, ...values.map(String));
	}

	async rpush(key: string, values: (string | number)[]): Promise<number> {
		return await this.connection.rpush(key, ...values.map(String));
	}

	async lpop(key: string): Promise<string | null> {
		return await this.connection.lpop(key);
	}

	async rpop(key: string): Promise<string | null> {
		return await this.connection.rpop(key);
	}

	async lrange(key: string, start: number, stop: number): Promise<string[]> {
		return await this.connection.lrange(key, start, stop);
	}

	async llen(key: string): Promise<number> {
		return await this.connection.llen(key);
	}

	// 🎯 Set Operations
	async sadd(key: string, members: (string | number)[]): Promise<number> {
		return await this.connection.sadd(key, ...members.map(String));
	}

	async smembers(key: string): Promise<string[]> {
		return await this.connection.smembers(key);
	}

	async srem(key: string, members: (string | number)[]): Promise<number> {
		return await this.connection.srem(key, ...members.map(String));
	}

	async sismember(key: string, member: string | number): Promise<number> {
		return await this.connection.sismember(key, String(member));
	}

	async scard(key: string): Promise<number> {
		return await this.connection.scard(key);
	}

	// 📊 Sorted Set Operations
	async zadd(key: string, members: ZSetMember[]): Promise<number> {
		const args: (string | number)[] = [];
		members.forEach((member) => {
			args.push(member.score, member.member);
		});
		return await this.connection.zadd(key, ...args);
	}

	async zrange(
		key: string,
		start: number,
		stop: number,
		withScores?: boolean,
	): Promise<string[]> {
		if (withScores) {
			return await this.connection.zrange(key, start, stop, "WITHSCORES");
		}
		return await this.connection.zrange(key, start, stop);
	}

	async zrevrange(
		key: string,
		start: number,
		stop: number,
		withScores?: boolean,
	): Promise<string[]> {
		if (withScores) {
			return await this.connection.zrevrange(
				key,
				start,
				stop,
				"WITHSCORES",
			);
		}
		return await this.connection.zrevrange(key, start, stop);
	}

	async zrem(key: string, members: (string | number)[]): Promise<number> {
		return await this.connection.zrem(key, ...members.map(String));
	}

	async zscore(key: string, member: string | number): Promise<string | null> {
		return await this.connection.zscore(key, String(member));
	}

	async zcard(key: string): Promise<number> {
		return await this.connection.zcard(key);
	}

	// 🔢 Counters, list trimming, and reverse rank (used by the example services below)
	async incr(key: string): Promise<number> {
		return await this.connection.incr(key);
	}

	async decr(key: string): Promise<number> {
		return await this.connection.decr(key);
	}

	async ltrim(key: string, start: number, stop: number): Promise<"OK"> {
		return await this.connection.ltrim(key, start, stop);
	}

	async zrevrank(key: string, member: string): Promise<number | null> {
		return await this.connection.zrevrank(key, member);
	}

	// 🚀 Advanced Operations
	async pipeline(commands: Array<[string, ...any[]]>): Promise<any[]> {
		const pipeline = this.connection.pipeline();
		commands.forEach(([command, ...args]) => {
			(pipeline as any)[command](...args);
		});
		const results = await pipeline.exec();
		return results?.map((result) => result[1]) || [];
	}

	async transaction(commands: Array<[string, ...any[]]>): Promise<any[]> {
		const multi = this.connection.multi();
		commands.forEach(([command, ...args]) => {
			(multi as any)[command](...args);
		});
		const results = await multi.exec();
		return results?.map((result) => result[1]) || [];
	}

	// 🔍 Search Operations (if Redis Stack is available)
	async ftSearch(index: string, query: string, options?: any): Promise<any> {
		try {
			return await (this.connection as any).call(
				"FT.SEARCH",
				index,
				query,
				...(options || []),
			);
		} catch (error) {
			console.warn(
				"FT.SEARCH not available. Make sure Redis Stack is installed.",
			);
			throw error;
		}
	}

	// 📊 JSON Operations (if RedisJSON is available)
	async jsonSet(key: string, path: string, value: any): Promise<string> {
		try {
			return await (this.connection as any).call(
				"JSON.SET",
				key,
				path,
				JSON.stringify(value),
			);
		} catch (error) {
			console.warn(
				"JSON.SET not available. Make sure RedisJSON module is loaded.",
			);
			throw error;
		}
	}

	async jsonGet(key: string, path?: string): Promise<any> {
		try {
			const result = await (this.connection as any).call(
				"JSON.GET",
				key,
				path || ".",
			);
			return JSON.parse(result);
		} catch (error) {
			console.warn(
				"JSON.GET not available. Make sure RedisJSON module is loaded.",
			);
			throw error;
		}
	}

	// 🏥 Health and Monitoring
	async ping(): Promise<string> {
		return await this.connection.ping();
	}

	async info(section?: string): Promise<string> {
		return await this.connection.info(section);
	}

	async memory(subcommand: string, ...args: any[]): Promise<any> {
		return await this.connection.memory(subcommand, ...args);
	}

	async dbsize(): Promise<number> {
		return await this.connection.dbsize();
	}

	async flushdb(): Promise<"OK"> {
		return await this.connection.flushdb();
	}

	// Cluster-specific operations
	async clusterInfo(): Promise<string> {
		if (this.connection instanceof Redis.Cluster) {
			return await this.connection.cluster("info");
		}
		throw new Error("Cluster operations only available in cluster mode");
	}

	async clusterNodes(): Promise<string> {
		if (this.connection instanceof Redis.Cluster) {
			return await this.connection.cluster("nodes");
		}
		throw new Error("Cluster operations only available in cluster mode");
	}

	// 🔌 Connection Management
	async disconnect(): Promise<void> {
		await this.connectionManager.closeConnection(this.connectionName);
	}

	getConnectionStatus(): string {
		return this.connection.status;
	}

	// Performance monitoring helper
	async measureOperation<T>(
		operation: () => Promise<T>,
		operationName: string,
	): Promise<T> {
		const start = Date.now();
		try {
			const result = await operation();
			const duration = Date.now() - start;
			console.log(`⚡ ${operationName}: ${duration}ms`);
			return result;
		} catch (error) {
			const duration = Date.now() - start;
			console.error(
				`❌ ${operationName} failed after ${duration}ms:`,
				error,
			);
			throw error;
		}
	}
}
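
A compact end-to-end run of the service (a sketch; the real-world examples below build on the same calls):

import { UniversalRedisService } from "./UniversalRedisService";
import { singleInstanceConfig } from "../config/redis";

async function main() {
	const service = new UniversalRedisService(singleInstanceConfig, "demo");
	await service.initialize();

	await service.set("config:site", { theme: "dark" }, { ttl: 300 });
	console.log(await service.get("config:site"));

	// Batch several commands in one round trip
	const results = await service.pipeline([
		["incr", "stats:hits"],
		["sadd", "stats:users", "user123"],
		["ttl", "config:site"],
	]);
	console.log(results);

	// Time an individual operation
	await service.measureOperation(() => service.dbsize(), "DBSIZE");

	await service.disconnect();
}

main().catch(console.error);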

🎯 Real-World Usage Examples

// src/examples/real-world-scenarios.ts
import { UniversalRedisService } from "../services/UniversalRedisService";
import {
	singleInstanceConfig,
	clusterConfig,
	dragondbConfig,
} from "../config/redis";

// 🚀 Factory function to create service based on environment
export function createRedisService(
	type?: "single" | "cluster" | "dragondb",
): UniversalRedisService {
	const config =
		type === "cluster"
			? clusterConfig
			: type === "dragondb"
			? dragondbConfig
			: singleInstanceConfig;

	return new UniversalRedisService(config, `${type || "single"}-connection`);
}

// 📱 Example 1: Session Management
export class SessionManager {
	private redis: UniversalRedisService;

	constructor(redisService: UniversalRedisService) {
		this.redis = redisService;
	}

	async createSession(userId: string, sessionData: any): Promise<string> {
		const sessionId = `session:${userId}:${Date.now()}`;
		const sessionKey = `user_sessions:${sessionId}`;

		await this.redis.set(
			sessionKey,
			{
				userId,
				createdAt: new Date().toISOString(),
				...sessionData,
			},
			{ ttl: 3600 },
		); // 1 hour

		// Add to user's active sessions set
		await this.redis.sadd(`user:${userId}:sessions`, [sessionId]);

		return sessionId;
	}

	async getSession(sessionId: string): Promise<any> {
		return await this.redis.get(`user_sessions:${sessionId}`);
	}

	async updateSessionActivity(sessionId: string): Promise<void> {
		const sessionKey = `user_sessions:${sessionId}`;
		const session = await this.redis.get(sessionKey);

		if (session) {
			session.lastActivity = new Date().toISOString();
			await this.redis.set(sessionKey, session, { ttl: 3600 });
		}
	}

	async destroySession(sessionId: string): Promise<void> {
		const session = await this.redis.get(`user_sessions:${sessionId}`);
		if (session) {
			await this.redis.del(`user_sessions:${sessionId}`);
			await this.redis.srem(`user:${session.userId}:sessions`, [
				sessionId,
			]);
		}
	}

	async getUserActiveSessions(userId: string): Promise<string[]> {
		return await this.redis.smembers(`user:${userId}:sessions`);
	}
}

// 🛒 Example 2: E-commerce Cart
export class ShoppingCart {
	private redis: UniversalRedisService;

	constructor(redisService: UniversalRedisService) {
		this.redis = redisService;
	}

	async addItem(
		userId: string,
		productId: string,
		quantity: number,
		price: number,
	): Promise<void> {
		const cartKey = `cart:${userId}`;

		// Add item to cart hash
		await this.redis.hset(cartKey, {
			[`${productId}:quantity`]: quantity,
			[`${productId}:price`]: price,
			[`${productId}:addedAt`]: Date.now(),
		});

		// Set cart expiration (30 days)
		await this.redis.expire(cartKey, 30 * 24 * 3600);

		// Update cart total
		await this.updateCartTotal(userId);
	}

	async removeItem(userId: string, productId: string): Promise<void> {
		const cartKey = `cart:${userId}`;
		await this.redis.hdel(cartKey, [
			`${productId}:quantity`,
			`${productId}:price`,
			`${productId}:addedAt`,
		]);
		await this.updateCartTotal(userId);
	}

	async getCart(userId: string): Promise<any> {
		const cartKey = `cart:${userId}`;
		const cartData = await this.redis.hgetall(cartKey);

		const cart: { items: any[]; total: number } = { items: [], total: 0 };
		const items: any = {};

		Object.entries(cartData).forEach(([key, value]) => {
			const [productId, field] = key.split(":");
			if (!items[productId]) items[productId] = { productId };

			if (field === "quantity")
				items[productId].quantity = parseInt(value);
			else if (field === "price")
				items[productId].price = parseFloat(value);
			else if (field === "addedAt")
				items[productId].addedAt = new Date(parseInt(value));
		});

		cart.items = Object.values(items);
		cart.total = cart.items.reduce(
			(sum: number, item: any) => sum + item.quantity * item.price,
			0,
		);

		return cart;
	}

	private async updateCartTotal(userId: string): Promise<void> {
		const cart = await this.getCart(userId);
		await this.redis.hset(`cart:${userId}`, { total: cart.total });
	}

	async clearCart(userId: string): Promise<void> {
		await this.redis.del(`cart:${userId}`);
	}
}

// 📊 Example 3: Real-time Analytics
export class AnalyticsService {
	private redis: UniversalRedisService;

	constructor(redisService: UniversalRedisService) {
		this.redis = redisService;
	}

	async trackPageView(pageUrl: string, userId?: string): Promise<void> {
		const today = new Date().toISOString().split("T")[0];
		const hour = new Date().getHours();

		// Daily page views
		await this.redis.pipeline([
			["incr", `analytics:pageviews:${today}`],
			["incr", `analytics:pageviews:${today}:${hour}`],
			["incr", `analytics:page:${pageUrl}:${today}`],
			// Count one view per page so zrevrange returns the most popular pages first
			["zincrby", `analytics:popular_pages:${today}`, 1, pageUrl],
		]);

		// User tracking
		if (userId) {
			await this.redis.sadd(`analytics:daily_users:${today}`, [userId]);
			await this.redis.lpush(`analytics:user:${userId}:history`, [
				pageUrl,
			]);
		}

		// Real-time stats (expire in 1 hour)
		await this.redis.set(`analytics:realtime:current_hour`, hour, {
			ttl: 3600,
		});
	}

	async getDailyStats(date: string): Promise<any> {
		const [pageViews, uniqueUsers, popularPages] = await Promise.all([
			this.redis.get(`analytics:pageviews:${date}`),
			this.redis.scard(`analytics:daily_users:${date}`),
			this.redis.zrevrange(`analytics:popular_pages:${date}`, 0, 9, true),
		]);

		return {
			date,
			pageViews: parseInt(pageViews || "0"),
			uniqueUsers,
			popularPages: this.formatZSetResults(popularPages),
		};
	}

	async getHourlyStats(date: string): Promise<number[]> {
		const hourlyStats = [];
		for (let hour = 0; hour < 24; hour++) {
			const views = await this.redis.get(
				`analytics:pageviews:${date}:${hour}`,
			);
			hourlyStats.push(parseInt(views || "0"));
		}
		return hourlyStats;
	}

	private formatZSetResults(
		results: string[],
	): Array<{ page: string; score: number }> {
		const formatted = [];
		for (let i = 0; i < results.length; i += 2) {
			formatted.push({
				page: results[i],
				score: parseFloat(results[i + 1]),
			});
		}
		return formatted;
	}
}

// 🎮 Example 4: Gaming Leaderboard
export class LeaderboardService {
	private redis: UniversalRedisService;

	constructor(redisService: UniversalRedisService) {
		this.redis = redisService;
	}

	async addScore(
		gameId: string,
		playerId: string,
		score: number,
	): Promise<void> {
		const leaderboardKey = `leaderboard:${gameId}`;

		// Add to sorted set
		await this.redis.zadd(leaderboardKey, [{ score, member: playerId }]);

		// Store player info
		await this.redis.hset(`player:${playerId}`, {
			lastScore: score,
			lastPlayed: Date.now(),
			[`game:${gameId}:bestScore`]: score,
		});

		// Update daily leaderboard
		const today = new Date().toISOString().split("T")[0];
		await this.redis.zadd(`leaderboard:${gameId}:daily:${today}`, [
			{ score, member: playerId },
		]);
	}

	async getTopPlayers(gameId: string, limit: number = 10): Promise<any[]> {
		const results = await this.redis.zrevrange(
			`leaderboard:${gameId}`,
			0,
			limit - 1,
			true,
		);

		const leaderboard = [];
		for (let i = 0; i < results.length; i += 2) {
			const playerId = results[i];
			const score = parseInt(results[i + 1]);
			const playerInfo = await this.redis.hgetall(`player:${playerId}`);

			leaderboard.push({
				rank: Math.floor(i / 2) + 1,
				playerId,
				score,
				...playerInfo,
			});
		}

		return leaderboard;
	}

	async getPlayerRank(
		gameId: string,
		playerId: string,
	): Promise<number | null> {
		const rank = await this.redis.zrevrank(
			`leaderboard:${gameId}`,
			playerId,
		);
		return rank !== null ? rank + 1 : null;
	}

	async getPlayerScore(
		gameId: string,
		playerId: string,
	): Promise<number | null> {
		const score = await this.redis.zscore(
			`leaderboard:${gameId}`,
			playerId,
		);
		return score ? parseFloat(score) : null;
	}
}

// 🔔 Example 5: Notification System
export class NotificationService {
	private redis: UniversalRedisService;

	constructor(redisService: UniversalRedisService) {
		this.redis = redisService;
	}

	async sendNotification(
		userId: string,
		notification: {
			id: string;
			title: string;
			message: string;
			type: "info" | "warning" | "error" | "success";
			data?: any;
		},
	): Promise<void> {
		const notificationKey = `notifications:${userId}`;
		const notificationData = {
			...notification,
			timestamp: Date.now(),
			read: false,
		};

		// Add to user's notification list
		await this.redis.lpush(notificationKey, [
			JSON.stringify(notificationData),
		]);

		// Keep only last 100 notifications
		await this.redis.ltrim(notificationKey, 0, 99);

		// Add to unread count
		await this.redis.incr(`notifications:${userId}:unread`);

		// Set expiration for cleanup (30 days)
		await this.redis.expire(notificationKey, 30 * 24 * 3600);
	}

	async getNotifications(
		userId: string,
		limit: number = 20,
		offset: number = 0,
	): Promise<any[]> {
		const notifications = await this.redis.lrange(
			`notifications:${userId}`,
			offset,
			offset + limit - 1,
		);

		return notifications.map((n) => JSON.parse(n));
	}

	async markAsRead(userId: string, notificationId: string): Promise<void> {
		const notifications = await this.getNotifications(userId, 100);
		const updatedNotifications = notifications.map((n) => {
			if (n.id === notificationId && !n.read) {
				n.read = true;
				this.redis.decr(`notifications:${userId}:unread`);
			}
			return JSON.stringify(n);
		});

		// Replace the list (reverse first so lpush restores the original newest-first order)
		await this.redis.del(`notifications:${userId}`);
		if (updatedNotifications.length > 0) {
			await this.redis.lpush(
				`notifications:${userId}`,
				updatedNotifications.reverse(),
			);
		}
	}

	async getUnreadCount(userId: string): Promise<number> {
		const count = await this.redis.get(`notifications:${userId}:unread`);
		return parseInt(count || "0");
	}

	async markAllAsRead(userId: string): Promise<void> {
		await this.redis.set(`notifications:${userId}:unread`, 0);
	}
}

// ๐Ÿš€ Usage Examples
async function demonstrateUsage() {
	// Initialize services
	const redisService = createRedisService("single"); // or 'cluster' or 'dragondb'
	await redisService.initialize();

	const sessionManager = new SessionManager(redisService);
	const cart = new ShoppingCart(redisService);
	const analytics = new AnalyticsService(redisService);
	const leaderboard = new LeaderboardService(redisService);
	const notifications = new NotificationService(redisService);

	try {
		// 1. Session management
		console.log("๐Ÿ” Creating user session...");
		const sessionId = await sessionManager.createSession("user123", {
			email: "user@example.com",
			role: "customer",
		});
		console.log(`Session created: ${sessionId}`);

		// 2. Shopping cart
		console.log("๐Ÿ›’ Adding items to cart...");
		await cart.addItem("user123", "product1", 2, 29.99);
		await cart.addItem("user123", "product2", 1, 49.99);
		const userCart = await cart.getCart("user123");
		console.log("Cart contents:", userCart);

		// 3. Analytics
		console.log("๐Ÿ“Š Tracking page views...");
		await analytics.trackPageView("/product/123", "user123");
		await analytics.trackPageView("/checkout", "user123");
		const stats = await analytics.getDailyStats(
			new Date().toISOString().split("T")[0],
		);
		console.log("Daily stats:", stats);

		// 4. Leaderboard
		console.log("๐ŸŽฎ Adding game scores...");
		await leaderboard.addScore("game1", "player1", 1500);
		await leaderboard.addScore("game1", "player2", 1200);
		const topPlayers = await leaderboard.getTopPlayers("game1", 5);
		console.log("Top players:", topPlayers);

		// 5. Notifications
		console.log("๐Ÿ”” Sending notifications...");
		await notifications.sendNotification("user123", {
			id: "notif1",
			title: "Welcome!",
			message: "Thanks for joining our platform",
			type: "success",
		});
		const userNotifications = await notifications.getNotifications(
			"user123",
		);
		console.log("User notifications:", userNotifications);
	} catch (error) {
		console.error("โŒ Error in demonstration:", error);
	} finally {
		await redisService.disconnect();
	}
}

// Export for use
export {
	SessionManager,
	ShoppingCart,
	AnalyticsService,
	LeaderboardService,
	NotificationService,
	demonstrateUsage,
};

๐ŸŽจ Express.js Middleware Integration

// src/middleware/cache.ts
import { Request, Response, NextFunction } from "express";
import { UniversalRedisService } from "../services/UniversalRedisService";
import crypto from "crypto";

interface CacheOptions {
	ttl?: number;
	keyGenerator?: (req: Request) => string;
	condition?: (req: Request) => boolean;
	skipCache?: (req: Request, res: Response) => boolean;
}

export function createCacheMiddleware(
	redisService: UniversalRedisService,
	options: CacheOptions = {},
) {
	const defaultOptions = {
		ttl: 300, // 5 minutes
		keyGenerator: (req: Request) => {
			const url = req.originalUrl || req.url;
			const method = req.method;
			const hash = crypto
				.createHash("md5")
				.update(`${method}:${url}`)
				.digest("hex");
			return `cache:${hash}`;
		},
		condition: () => true,
		skipCache: (req: Request) => req.method !== "GET",
	};

	const config = { ...defaultOptions, ...options };

	return async (req: Request, res: Response, next: NextFunction) => {
		// Skip cache for non-GET requests or based on condition
		if (config.skipCache(req, res) || !config.condition(req)) {
			return next();
		}

		const cacheKey = config.keyGenerator(req);

		try {
			// Try to get from cache
			const cachedData = await redisService.get(cacheKey);

			if (cachedData) {
				console.log(`๐ŸŽฏ Cache hit: ${cacheKey}`);
				return res.json(cachedData);
			}

			// Store the original json method
			const originalJson = res.json.bind(res);

			// Override the json method to cache the response
			res.json = function (data: any) {
				// Cache the response
				redisService
					.set(cacheKey, data, { ttl: config.ttl })
					.catch((err) => {
						console.error("โŒ Cache write error:", err);
					});

				console.log(`๐Ÿ’พ Cache miss, stored: ${cacheKey}`);
				return originalJson(data);
			};

			next();
		} catch (error) {
			console.error("โŒ Cache middleware error:", error);
			next(); // Continue without cache on error
		}
	};
}

// src/middleware/rate-limit.ts
export function createRateLimitMiddleware(
	redisService: UniversalRedisService,
	options: {
		windowMs: number;
		maxRequests: number;
		keyGenerator?: (req: Request) => string;
		onLimitReached?: (req: Request, res: Response) => void;
	},
) {
	const defaultKeyGenerator = (req: Request) => {
		return `rate_limit:${req.ip}:${Math.floor(
			Date.now() / options.windowMs,
		)}`;
	};

	const keyGenerator = options.keyGenerator || defaultKeyGenerator;

	return async (req: Request, res: Response, next: NextFunction) => {
		const key = keyGenerator(req);

		try {
			const current = await redisService.incr(key);

			if (current === 1) {
				// Set expiration for new key
				await redisService.expire(
					key,
					Math.ceil(options.windowMs / 1000),
				);
			}

			// Add headers
			res.set({
				"X-RateLimit-Limit": options.maxRequests.toString(),
				"X-RateLimit-Remaining": Math.max(
					0,
					options.maxRequests - current,
				).toString(),
				"X-RateLimit-Reset": new Date(
					Date.now() + options.windowMs,
				).toISOString(),
			});

			if (current > options.maxRequests) {
				if (options.onLimitReached) {
					options.onLimitReached(req, res);
				}
				return res.status(429).json({
					error: "Too Many Requests",
					message: `Rate limit exceeded. Try again in ${Math.ceil(
						options.windowMs / 1000,
					)} seconds.`,
				});
			}

			next();
		} catch (error) {
			console.error("โŒ Rate limiting error:", error);
			next(); // Continue without rate limiting on error
		}
	};
}
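
The same factory can be reused with different options for route-specific limits. The sketch below is a hypothetical example (the endpoint path, thresholds, and the `app`/`redisService` instances from the application file below are assumptions):

// Stricter limiter for a sensitive endpoint, reusing the shared Redis service
const loginRateLimit = createRateLimitMiddleware(redisService, {
	windowMs: 60 * 1000, // 1 minute window
	maxRequests: 5,
	keyGenerator: (req) => `rate_limit:login:${req.ip}`,
	onLimitReached: (req) => {
		console.warn(`๐Ÿšซ Login rate limit hit for ${req.ip}`);
	},
});

app.post("/api/login", loginRateLimit, (req, res) => {
	// ... authenticate the user here
	res.json({ success: true });
});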

// src/app.ts - Complete Express.js Application
import express from "express";
import cors from "cors";
import helmet from "helmet";
import compression from "compression";
import { createRedisService } from "./examples/real-world-scenarios";
import { createCacheMiddleware } from "./middleware/cache";
import { createRateLimitMiddleware } from "./middleware/rate-limit";
import {
	SessionManager,
	ShoppingCart,
	AnalyticsService,
} from "./examples/real-world-scenarios";

const app = express();

// Middleware
app.use(helmet());
app.use(cors());
app.use(compression());
app.use(express.json());

// Initialize Redis service
const redisService = createRedisService(
	(process.env.REDIS_TYPE as any) || "single",
);

// Initialize services
let sessionManager: SessionManager;
let cart: ShoppingCart;
let analytics: AnalyticsService;

// Cache middleware
const cacheMiddleware = createCacheMiddleware(redisService, {
	ttl: 600, // 10 minutes
	condition: (req) => req.url.startsWith("/api/public/"),
});

// Rate limiting middleware
const rateLimitMiddleware = createRateLimitMiddleware(redisService, {
	windowMs: 15 * 60 * 1000, // 15 minutes
	maxRequests: 100,
	keyGenerator: (req) => `rate_limit:${req.ip}`,
});

// Apply rate limiting globally
app.use(rateLimitMiddleware);

// Health check endpoint
app.get("/health", async (req, res) => {
	try {
		const ping = await redisService.ping();
		const status = redisService.getConnectionStatus();

		res.json({
			status: "healthy",
			redis: {
				ping,
				status,
				type: process.env.REDIS_TYPE || "single",
			},
			timestamp: new Date().toISOString(),
		});
	} catch (error) {
		res.status(500).json({
			status: "unhealthy",
			error: error instanceof Error ? error.message : "Unknown error",
		});
	}
});

// Public API endpoints (cached)
app.get("/api/public/stats", cacheMiddleware, async (req, res) => {
	try {
		const today = new Date().toISOString().split("T")[0];
		const stats = await analytics.getDailyStats(today);
		res.json(stats);
	} catch (error) {
		res.status(500).json({ error: "Failed to fetch stats" });
	}
});

// User session endpoints
app.post("/api/sessions", async (req, res) => {
	try {
		const { userId, sessionData } = req.body;
		const sessionId = await sessionManager.createSession(
			userId,
			sessionData,
		);
		res.json({ sessionId });
	} catch (error) {
		res.status(500).json({ error: "Failed to create session" });
	}
});

app.get("/api/sessions/:sessionId", async (req, res) => {
	try {
		const session = await sessionManager.getSession(req.params.sessionId);
		if (!session) {
			return res.status(404).json({ error: "Session not found" });
		}
		res.json(session);
	} catch (error) {
		res.status(500).json({ error: "Failed to fetch session" });
	}
});

// Shopping cart endpoints
app.post("/api/cart/:userId/items", async (req, res) => {
	try {
		const { productId, quantity, price } = req.body;
		await cart.addItem(req.params.userId, productId, quantity, price);
		res.json({ success: true });
	} catch (error) {
		res.status(500).json({ error: "Failed to add item to cart" });
	}
});

app.get("/api/cart/:userId", async (req, res) => {
	try {
		const userCart = await cart.getCart(req.params.userId);
		res.json(userCart);
	} catch (error) {
		res.status(500).json({ error: "Failed to fetch cart" });
	}
});

// Analytics endpoints
app.post("/api/analytics/pageview", async (req, res) => {
	try {
		const { pageUrl, userId } = req.body;
		await analytics.trackPageView(pageUrl, userId);
		res.json({ success: true });
	} catch (error) {
		res.status(500).json({ error: "Failed to track page view" });
	}
});

// Error handling middleware
app.use(
	(
		error: any,
		req: express.Request,
		res: express.Response,
		next: express.NextFunction,
	) => {
		console.error("โŒ Unhandled error:", error);
		res.status(500).json({
			error: "Internal Server Error",
			message:
				process.env.NODE_ENV === "development"
					? error.message
					: "Something went wrong",
		});
	},
);

// Initialize and start server
async function startServer() {
	try {
		console.log("๐Ÿš€ Initializing Redis connection...");
		await redisService.initialize();

		// Initialize services
		sessionManager = new SessionManager(redisService);
		cart = new ShoppingCart(redisService);
		analytics = new AnalyticsService(redisService);

		console.log("โœ… Services initialized");

		const PORT = process.env.PORT || 3000;
		app.listen(PORT, () => {
			console.log(`๐ŸŒ Server running on port ${PORT}`);
			console.log(`๐Ÿ”— Health check: http://localhost:${PORT}/health`);
		});
	} catch (error) {
		console.error("โŒ Failed to start server:", error);
		process.exit(1);
	}
}

// Graceful shutdown
process.on("SIGINT", async () => {
	console.log("๐Ÿ›‘ Shutting down gracefully...");
	await redisService.disconnect();
	process.exit(0);
});

process.on("SIGTERM", async () => {
	console.log("๐Ÿ›‘ Received SIGTERM, shutting down...");
	await redisService.disconnect();
	process.exit(0);
});

// Start the server
if (require.main === module) {
	startServer();
}

export { app, startServer };

๐Ÿ“Š Monitoring & Performance Optimization

๐ŸŽฏ Monitoring Strategy Overview

graph TB
    A[๐Ÿ“Š Monitoring Strategy] --> B[๐Ÿ” Health Checks]
    A --> C[๐Ÿ“ˆ Performance Metrics]
    A --> D[๐Ÿšจ Alerting]
    A --> E[๐Ÿ“‹ Logging]

    B --> B1[Connection Status]
    B --> B2[Memory Usage]
    B --> B3[Cluster Health]

    C --> C1[Operations/sec]
    C --> C2[Latency]
    C --> C3[Hit Rate]
    C --> C4[CPU Usage]

    D --> D1[Threshold Alerts]
    D --> D2[Anomaly Detection]
    D --> D3[Predictive Alerts]

    E --> E1[Application Logs]
    E --> E2[Redis Logs]
    E --> E3[System Logs]

    style A fill:#e1f5fe
    style B fill:#e8f5e8
    style C fill:#fff3e0
    style D fill:#ffebee
    style E fill:#f3e5f5

๐Ÿ” Advanced Health Monitoring

// src/monitoring/HealthMonitor.ts
import { UniversalRedisService } from "../services/UniversalRedisService";
import { EventEmitter } from "events";

export interface HealthMetrics {
	timestamp: number;
	connectionStatus: string;
	memory: {
		used: number;
		max: number;
		percentage: number;
	};
	performance: {
		operationsPerSecond: number;
		avgLatency: number;
		hitRate: number;
	};
	cluster?: {
		state: string;
		connectedNodes: number;
		totalNodes: number;
	};
	system: {
		cpuUsage: number;
		freeMemory: number;
	};
}

export interface AlertRule {
	name: string;
	condition: (metrics: HealthMetrics) => boolean;
	severity: "low" | "medium" | "high" | "critical";
	message: (metrics: HealthMetrics) => string;
	cooldown: number; // seconds
}

export class HealthMonitor extends EventEmitter {
	private redisService: UniversalRedisService;
	private metricsHistory: HealthMetrics[] = [];
	private alertRules: AlertRule[] = [];
	private lastAlerts: Map<string, number> = new Map();
	private monitoringInterval?: NodeJS.Timeout;

	constructor(redisService: UniversalRedisService) {
		super();
		this.redisService = redisService;
		this.setupDefaultAlertRules();
	}

	private setupDefaultAlertRules(): void {
		this.alertRules = [
			{
				name: "high_memory_usage",
				condition: (metrics) => metrics.memory.percentage > 85,
				severity: "high",
				message: (metrics) =>
					`Memory usage critical: ${metrics.memory.percentage.toFixed(
						1,
					)}%`,
				cooldown: 300, // 5 minutes
			},
			{
				name: "connection_lost",
				condition: (metrics) => metrics.connectionStatus !== "ready",
				severity: "critical",
				message: (metrics) =>
					`Redis connection lost: ${metrics.connectionStatus}`,
				cooldown: 60, // 1 minute
			},
			{
				name: "high_latency",
				condition: (metrics) => metrics.performance.avgLatency > 10,
				severity: "medium",
				message: (metrics) =>
					`High latency detected: ${metrics.performance.avgLatency}ms`,
				cooldown: 300,
			},
			{
				name: "low_hit_rate",
				condition: (metrics) => metrics.performance.hitRate < 80,
				severity: "medium",
				message: (metrics) =>
					`Low cache hit rate: ${metrics.performance.hitRate.toFixed(
						1,
					)}%`,
				cooldown: 600, // 10 minutes
			},
			{
				name: "cluster_degraded",
				condition: (metrics) =>
					metrics.cluster !== undefined &&
					metrics.cluster.connectedNodes < metrics.cluster.totalNodes,
				severity: "high",
				message: (metrics) =>
					`Cluster degraded: ${metrics.cluster?.connectedNodes}/${metrics.cluster?.totalNodes} nodes`,
				cooldown: 180, // 3 minutes
			},
		];
	}

	async collectMetrics(): Promise<HealthMetrics> {
		const start = Date.now();

		try {
			// Basic connection test
			await this.redisService.ping();
			const connectionStatus = this.redisService.getConnectionStatus();

			// Memory information
			const memoryInfo = await this.redisService.info("memory");
			const memoryLines = memoryInfo.split("\r\n");

			const usedMemory = this.parseInfoValue(memoryLines, "used_memory");
			const maxMemory =
				this.parseInfoValue(memoryLines, "maxmemory") ||
				32 * 1024 * 1024 * 1024; // Default 32GB

			// Performance metrics
			const statsInfo = await this.redisService.info("stats");
			const statsLines = statsInfo.split("\r\n");

			const totalConnections =
				this.parseInfoValue(statsLines, "total_connections_received") ||
				0;
			const totalCommands =
				this.parseInfoValue(statsLines, "total_commands_processed") ||
				0;
			const cacheHits =
				this.parseInfoValue(statsLines, "keyspace_hits") || 0;
			const cacheMisses =
				this.parseInfoValue(statsLines, "keyspace_misses") || 0;

			const latency = Date.now() - start;
			const hitRate =
				cacheHits + cacheMisses > 0
					? (cacheHits / (cacheHits + cacheMisses)) * 100
					: 100;

			// Calculate operations per second (approximation)
			const previousMetrics =
				this.metricsHistory[this.metricsHistory.length - 1];
			let operationsPerSecond = 0;

			if (previousMetrics) {
				const timeDiff =
					(Date.now() - previousMetrics.timestamp) / 1000;
				const commandsDiff =
					totalCommands - (previousMetrics as any).totalCommands;
				operationsPerSecond =
					timeDiff > 0 ? commandsDiff / timeDiff : 0;
			}

			// Cluster information (if applicable)
			let clusterInfo;
			try {
				const clusterState = await this.redisService.clusterInfo();
				const clusterNodes = await this.redisService.clusterNodes();

				clusterInfo = {
					state: this.parseClusterState(clusterState),
					connectedNodes: this.countConnectedNodes(clusterNodes),
					totalNodes: this.countTotalNodes(clusterNodes),
				};
			} catch (error) {
				// Single instance - no cluster info
			}

			// System metrics (simplified; CPU usage would need OS-level monitoring)
			const systemInfo = {
				cpuUsage: 0,
				freeMemory: os.freemem(),
			};

			const metrics: HealthMetrics = {
				timestamp: Date.now(),
				connectionStatus,
				memory: {
					used: usedMemory,
					max: maxMemory,
					percentage: (usedMemory / maxMemory) * 100,
				},
				performance: {
					operationsPerSecond,
					avgLatency: latency,
					hitRate,
				},
				cluster: clusterInfo,
				system: systemInfo,
			};

			// Store for historical analysis
			(metrics as any).totalCommands = totalCommands;

			return metrics;
		} catch (error) {
			console.error("โŒ Error collecting metrics:", error);
			throw error;
		}
	}

	private parseInfoValue(lines: string[], key: string): number {
		const line = lines.find((l) => l.startsWith(`${key}:`));
		return line ? parseInt(line.split(":")[1]) : 0;
	}

	private parseClusterState(clusterInfo: string): string {
		const stateLine = clusterInfo
			.split("\r\n")
			.find((l) => l.startsWith("cluster_state:"));
		return stateLine ? stateLine.split(":")[1] : "unknown";
	}

	private countConnectedNodes(nodesInfo: string): number {
		return nodesInfo
			.split("\n")
			.filter(
				(line) =>
					line.includes("connected") &&
					!line.includes("disconnected"),
			).length;
	}

	private countTotalNodes(nodesInfo: string): number {
		return nodesInfo.split("\n").filter((line) => line.trim()).length;
	}

	startMonitoring(intervalMs: number = 30000): void {
		console.log(
			`๐Ÿ” Starting health monitoring (interval: ${intervalMs}ms)`,
		);

		this.monitoringInterval = setInterval(async () => {
			try {
				const metrics = await this.collectMetrics();
				this.metricsHistory.push(metrics);

				// Keep only last 100 metrics (for memory efficiency)
				if (this.metricsHistory.length > 100) {
					this.metricsHistory.shift();
				}

				// Check alert rules
				this.checkAlertRules(metrics);

				// Emit metrics event
				this.emit("metrics", metrics);

				console.log(
					`๐Ÿ“Š Health check: ${
						metrics.connectionStatus
					}, Memory: ${metrics.memory.percentage.toFixed(
						1,
					)}%, Latency: ${metrics.performance.avgLatency}ms`,
				);
			} catch (error) {
				console.error("โŒ Health monitoring error:", error);
				this.emit("error", error);
			}
		}, intervalMs);
	}

	stopMonitoring(): void {
		if (this.monitoringInterval) {
			clearInterval(this.monitoringInterval);
			this.monitoringInterval = undefined;
			console.log("๐Ÿ›‘ Health monitoring stopped");
		}
	}

	private checkAlertRules(metrics: HealthMetrics): void {
		const now = Date.now();

		this.alertRules.forEach((rule) => {
			if (rule.condition(metrics)) {
				const lastAlert = this.lastAlerts.get(rule.name) || 0;
				const cooldownMs = rule.cooldown * 1000;

				if (now - lastAlert > cooldownMs) {
					const alert = {
						rule: rule.name,
						severity: rule.severity,
						message: rule.message(metrics),
						timestamp: now,
						metrics,
					};

					this.emit("alert", alert);
					this.lastAlerts.set(rule.name, now);

					console.warn(
						`๐Ÿšจ ALERT [${rule.severity.toUpperCase()}]: ${
							alert.message
						}`,
					);
				}
			}
		});
	}

	getMetricsHistory(): HealthMetrics[] {
		return [...this.metricsHistory];
	}

	getLatestMetrics(): HealthMetrics | null {
		return this.metricsHistory.length > 0
			? this.metricsHistory[this.metricsHistory.length - 1]
			: null;
	}

	addAlertRule(rule: AlertRule): void {
		this.alertRules.push(rule);
	}

	removeAlertRule(name: string): void {
		this.alertRules = this.alertRules.filter((rule) => rule.name !== name);
	}

	// Performance analysis methods
	calculateTrends(minutes: number = 30): {
		memoryTrend: "increasing" | "decreasing" | "stable";
		latencyTrend: "increasing" | "decreasing" | "stable";
		operationsTrend: "increasing" | "decreasing" | "stable";
	} {
		const cutoff = Date.now() - minutes * 60 * 1000;
		const recentMetrics = this.metricsHistory.filter(
			(m) => m.timestamp > cutoff,
		);

		if (recentMetrics.length < 2) {
			return {
				memoryTrend: "stable",
				latencyTrend: "stable",
				operationsTrend: "stable",
			};
		}

		const first = recentMetrics[0];
		const last = recentMetrics[recentMetrics.length - 1];

		return {
			memoryTrend: this.calculateTrend(
				first.memory.percentage,
				last.memory.percentage,
			),
			latencyTrend: this.calculateTrend(
				first.performance.avgLatency,
				last.performance.avgLatency,
			),
			operationsTrend: this.calculateTrend(
				first.performance.operationsPerSecond,
				last.performance.operationsPerSecond,
			),
		};
	}

	private calculateTrend(
		first: number,
		last: number,
	): "increasing" | "decreasing" | "stable" {
		const threshold = 0.05; // 5% change threshold
		if (first === 0) {
			return last === 0 ? "stable" : "increasing";
		}

		const change = (last - first) / first;
		if (Math.abs(change) < threshold) return "stable";

		return change > 0 ? "increasing" : "decreasing";
	}
}

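A short usage sketch for the monitor above: custom rules are added with `addAlertRule` and everything else is consumed through events. The throughput baseline is an illustrative assumption, and `redisService` stands for an already-initialized UniversalRedisService:

// redisService: an initialized UniversalRedisService (see earlier sections)
const healthMonitor = new HealthMonitor(redisService);

// Custom rule: warn when throughput drops below an assumed baseline
healthMonitor.addAlertRule({
	name: "low_throughput",
	condition: (m) => m.performance.operationsPerSecond < 100,
	severity: "low",
	message: (m) =>
		`Throughput below baseline: ${m.performance.operationsPerSecond.toFixed(0)} ops/sec`,
	cooldown: 600, // at most one alert every 10 minutes
});

healthMonitor.on("metrics", (m: HealthMetrics) => {
	// Forward samples to your metrics backend (CloudWatch, Prometheus, ...)
});

healthMonitor.on("alert", (a) => {
	console.warn(`โš ๏ธ ${a.severity}: ${a.message}`);
});

healthMonitor.startMonitoring(30000); // collect every 30 seconds
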
๐Ÿ“ˆ Performance Dashboard

// src/monitoring/PerformanceDashboard.ts
import { HealthMonitor, HealthMetrics } from "./HealthMonitor";
import express from "express";

export class PerformanceDashboard {
	private healthMonitor: HealthMonitor;
	private app: express.Application;

	constructor(healthMonitor: HealthMonitor) {
		this.healthMonitor = healthMonitor;
		this.app = express();
		this.setupRoutes();
	}

	private setupRoutes(): void {
		this.app.use(express.static("public")); // Serve static files

		// Real-time metrics endpoint
		this.app.get("/api/metrics/current", (req, res) => {
			const metrics = this.healthMonitor.getLatestMetrics();
			res.json(metrics);
		});

		// Historical metrics endpoint
		this.app.get("/api/metrics/history", (req, res) => {
			const minutes = parseInt(req.query.minutes as string) || 30;
			const cutoff = Date.now() - minutes * 60 * 1000;

			const history = this.healthMonitor
				.getMetricsHistory()
				.filter((m) => m.timestamp > cutoff);

			res.json(history);
		});

		// Performance trends endpoint
		this.app.get("/api/metrics/trends", (req, res) => {
			const minutes = parseInt(req.query.minutes as string) || 30;
			const trends = this.healthMonitor.calculateTrends(minutes);
			res.json(trends);
		});

		// Health summary endpoint
		this.app.get("/api/health/summary", (req, res) => {
			const latest = this.healthMonitor.getLatestMetrics();
			if (!latest) {
				return res.status(503).json({ error: "No metrics available" });
			}

			const summary = {
				status: this.getOverallStatus(latest),
				uptime: process.uptime(),
				metrics: {
					memory: `${latest.memory.percentage.toFixed(1)}%`,
					latency: `${latest.performance.avgLatency}ms`,
					hitRate: `${latest.performance.hitRate.toFixed(1)}%`,
					operations: `${latest.performance.operationsPerSecond.toFixed(
						0,
					)}/sec`,
				},
				cluster: latest.cluster
					? {
							state: latest.cluster.state,
							nodes: `${latest.cluster.connectedNodes}/${latest.cluster.totalNodes}`,
					  }
					: null,
			};

			res.json(summary);
		});

		// Detailed HTML dashboard
		this.app.get("/dashboard", (req, res) => {
			res.send(this.generateDashboardHTML());
		});
	}

	private getOverallStatus(
		metrics: HealthMetrics,
	): "healthy" | "warning" | "critical" {
		if (metrics.connectionStatus !== "ready") return "critical";
		if (metrics.memory.percentage > 90) return "critical";
		if (metrics.performance.avgLatency > 100) return "warning";
		if (metrics.performance.hitRate < 70) return "warning";
		if (metrics.cluster && metrics.cluster.state !== "ok") return "warning";

		return "healthy";
	}

	private generateDashboardHTML(): string {
		return `
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>๐Ÿš€ Redis Performance Dashboard</title>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <style>
        body {
            font-family: 'Segoe UI', system-ui, sans-serif;
            margin: 0;
            padding: 20px;
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            color: #333;
        }
        .container {
            max-width: 1200px;
            margin: 0 auto;
            background: white;
            border-radius: 12px;
            padding: 30px;
            box-shadow: 0 20px 40px rgba(0,0,0,0.1);
        }
        .header {
            text-align: center;
            margin-bottom: 40px;
            border-bottom: 3px solid #667eea;
            padding-bottom: 20px;
        }
        .header h1 {
            color: #667eea;
            margin: 0;
            font-size: 2.5em;
        }
        .metrics-grid {
            display: grid;
            grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
            gap: 20px;
            margin-bottom: 40px;
        }
        .metric-card {
            background: linear-gradient(135deg, #f093fb 0%, #f5576c 100%);
            color: white;
            padding: 25px;
            border-radius: 12px;
            text-align: center;
            box-shadow: 0 8px 16px rgba(0,0,0,0.1);
            transition: transform 0.3s ease;
        }
        .metric-card:hover {
            transform: translateY(-5px);
        }
        .metric-value {
            font-size: 2.5em;
            font-weight: bold;
            margin-bottom: 10px;
        }
        .metric-label {
            font-size: 1.1em;
            opacity: 0.9;
        }
        .chart-container {
            background: #f8f9fa;
            padding: 25px;
            border-radius: 12px;
            margin-bottom: 30px;
            box-shadow: 0 4px 8px rgba(0,0,0,0.05);
        }
        .status-indicator {
            display: inline-block;
            width: 12px;
            height: 12px;
            border-radius: 50%;
            margin-right: 8px;
        }
        .status-healthy { background: #28a745; }
        .status-warning { background: #ffc107; }
        .status-critical { background: #dc3545; }
        .refresh-btn {
            background: #667eea;
            color: white;
            border: none;
            padding: 12px 24px;
            border-radius: 8px;
            cursor: pointer;
            font-size: 16px;
            transition: background 0.3s ease;
        }
        .refresh-btn:hover {
            background: #5a6fd8;
        }
    </style>
</head>
<body>
    <div class="container">
        <div class="header">
            <h1>๐Ÿš€ Redis Performance Dashboard</h1>
            <button class="refresh-btn" onclick="refreshData()">๐Ÿ”„ Refresh</button>
            <div id="status" style="margin-top: 15px; font-size: 1.2em;">
                <span class="status-indicator status-healthy"></span>
                Loading...
            </div>
        </div>

        <div class="metrics-grid" id="metricsGrid">
            <!-- Metrics will be populated by JavaScript -->
        </div>

        <div class="chart-container">
            <h3>๐Ÿ“Š Memory Usage Over Time</h3>
            <canvas id="memoryChart" height="100"></canvas>
        </div>

        <div class="chart-container">
            <h3>โšก Latency and Operations/sec</h3>
            <canvas id="performanceChart" height="100"></canvas>
        </div>
    </div>

    <script>
        let memoryChart, performanceChart;

        async function fetchMetrics() {
            try {
                const [current, history] = await Promise.all([
                    fetch('/api/metrics/current').then(r => r.json()),
                    fetch('/api/metrics/history?minutes=60').then(r => r.json())
                ]);
                
                updateMetricsDisplay(current);
                updateCharts(history);
                
                document.getElementById('status').innerHTML = 
                    \`<span class="status-indicator status-healthy"></span>
                     Last updated: \${new Date().toLocaleTimeString()}\`;
            } catch (error) {
                console.error('Error fetching metrics:', error);
                document.getElementById('status').innerHTML = 
                    \`<span class="status-indicator status-critical"></span>
                     Error loading data\`;
            }
        }

        function updateMetricsDisplay(metrics) {
            const grid = document.getElementById('metricsGrid');
            grid.innerHTML = \`
                <div class="metric-card">
                    <div class="metric-value">\${metrics.memory.percentage.toFixed(1)}%</div>
                    <div class="metric-label">๐Ÿ’พ Memory Usage</div>
                </div>
                <div class="metric-card">
                    <div class="metric-value">\${metrics.performance.avgLatency}ms</div>
                    <div class="metric-label">โšก Latency</div>
                </div>
                <div class="metric-card">
                    <div class="metric-value">\${metrics.performance.hitRate.toFixed(1)}%</div>
                    <div class="metric-label">๐ŸŽฏ Hit Rate</div>
                </div>
                <div class="metric-card">
                    <div class="metric-value">\${metrics.performance.operationsPerSecond.toFixed(0)}</div>
                    <div class="metric-label">๐Ÿš€ Ops/sec</div>
                </div>
            \`;
        }

        function updateCharts(history) {
            const labels = history.map(h => new Date(h.timestamp).toLocaleTimeString());
            const memoryData = history.map(h => h.memory.percentage);
            const latencyData = history.map(h => h.performance.avgLatency);
            const opsData = history.map(h => h.performance.operationsPerSecond);

            // Memory Chart
            if (memoryChart) memoryChart.destroy();
            memoryChart = new Chart(document.getElementById('memoryChart'), {
                type: 'line',
                data: {
                    labels: labels,
                    datasets: [{
                        label: 'Memory Usage (%)',
                        data: memoryData,
                        borderColor: '#f5576c',
                        backgroundColor: 'rgba(245, 87, 108, 0.1)',
                        tension: 0.4
                    }]
                },
                options: {
                    responsive: true,
                    scales: {
                        y: { beginAtZero: true, max: 100 }
                    }
                }
            });

            // Performance Chart
            if (performanceChart) performanceChart.destroy();
            performanceChart = new Chart(document.getElementById('performanceChart'), {
                type: 'line',
                data: {
                    labels: labels,
                    datasets: [{
                        label: 'Latency (ms)',
                        data: latencyData,
                        borderColor: '#667eea',
                        backgroundColor: 'rgba(102, 126, 234, 0.1)',
                        yAxisID: 'y'
                    }, {
                        label: 'Operations/sec',
                        data: opsData,
                        borderColor: '#28a745',
                        backgroundColor: 'rgba(40, 167, 69, 0.1)',
                        yAxisID: 'y1'
                    }]
                },
                options: {
                    responsive: true,
                    scales: {
                        y: { type: 'linear', position: 'left' },
                        y1: { type: 'linear', position: 'right', grid: { drawOnChartArea: false } }
                    }
                }
            });
        }

        function refreshData() {
            fetchMetrics();
        }

        // Initial load and auto-refresh
        fetchMetrics();
        setInterval(fetchMetrics, 30000); // Refresh every 30 seconds
    </script>
</body>
</html>
    `;
	}

	start(port: number = 3001): void {
		this.app.listen(port, () => {
			console.log(
				`๐Ÿ“Š Performance dashboard available at: http://localhost:${port}/dashboard`,
			);
		});
	}
}

๐Ÿšจ Alerting System

// src/monitoring/AlertingSystem.ts
import { EventEmitter } from "events";
import { HealthMonitor } from "./HealthMonitor";

export interface AlertChannel {
	name: string;
	send(alert: Alert): Promise<void>;
}

export interface Alert {
	id: string;
	rule: string;
	severity: "low" | "medium" | "high" | "critical";
	message: string;
	timestamp: number;
	acknowledged: boolean;
	resolvedAt?: number;
}

export class AlertingSystem extends EventEmitter {
	private alerts: Map<string, Alert> = new Map();
	private channels: AlertChannel[] = [];
	private healthMonitor: HealthMonitor;

	constructor(healthMonitor: HealthMonitor) {
		super();
		this.healthMonitor = healthMonitor;
		this.setupEventListeners();
	}

	private setupEventListeners(): void {
		this.healthMonitor.on("alert", async (alertData) => {
			const alert: Alert = {
				id: `${alertData.rule}_${alertData.timestamp}`,
				rule: alertData.rule,
				severity: alertData.severity,
				message: alertData.message,
				timestamp: alertData.timestamp,
				acknowledged: false,
			};

			this.alerts.set(alert.id, alert);

			// Send through all channels
			await this.sendAlert(alert);

			this.emit("alert_created", alert);
		});
	}

	addChannel(channel: AlertChannel): void {
		this.channels.push(channel);
		console.log(`๐Ÿ“ข Added alert channel: ${channel.name}`);
	}

	private async sendAlert(alert: Alert): Promise<void> {
		const promises = this.channels.map(async (channel) => {
			try {
				await channel.send(alert);
				console.log(
					`โœ… Alert sent via ${channel.name}: ${alert.message}`,
				);
			} catch (error) {
				console.error(
					`โŒ Failed to send alert via ${channel.name}:`,
					error,
				);
			}
		});

		await Promise.allSettled(promises);
	}

	acknowledgeAlert(alertId: string): boolean {
		const alert = this.alerts.get(alertId);
		if (alert) {
			alert.acknowledged = true;
			this.emit("alert_acknowledged", alert);
			return true;
		}
		return false;
	}

	resolveAlert(alertId: string): boolean {
		const alert = this.alerts.get(alertId);
		if (alert) {
			alert.resolvedAt = Date.now();
			this.emit("alert_resolved", alert);
			return true;
		}
		return false;
	}

	getActiveAlerts(): Alert[] {
		return Array.from(this.alerts.values())
			.filter((alert) => !alert.resolvedAt)
			.sort((a, b) => b.timestamp - a.timestamp);
	}

	getAlertHistory(hours: number = 24): Alert[] {
		const cutoff = Date.now() - hours * 60 * 60 * 1000;
		return Array.from(this.alerts.values())
			.filter((alert) => alert.timestamp > cutoff)
			.sort((a, b) => b.timestamp - a.timestamp);
	}
}

// Console Alert Channel
export class ConsoleAlertChannel implements AlertChannel {
	name = "console";

	async send(alert: Alert): Promise<void> {
		const timestamp = new Date(alert.timestamp).toISOString();
		const emoji = this.getSeverityEmoji(alert.severity);

		console.log(
			`๐Ÿšจ ${emoji} [${alert.severity.toUpperCase()}] ${timestamp}: ${
				alert.message
			}`,
		);
	}

	private getSeverityEmoji(severity: string): string {
		switch (severity) {
			case "critical":
				return "๐Ÿ”ฅ";
			case "high":
				return "โš ๏ธ";
			case "medium":
				return "๐ŸŸก";
			case "low":
				return "๐Ÿ”ต";
			default:
				return "๐Ÿ“ข";
		}
	}
}

// Webhook Alert Channel
export class WebhookAlertChannel implements AlertChannel {
	name = "webhook";
	private webhookUrl: string;

	constructor(webhookUrl: string) {
		this.webhookUrl = webhookUrl;
	}

	async send(alert: Alert): Promise<void> {
		const payload = {
			alert_id: alert.id,
			rule: alert.rule,
			severity: alert.severity,
			message: alert.message,
			timestamp: alert.timestamp,
			service: "redis-monitor",
		};

		const response = await fetch(this.webhookUrl, {
			method: "POST",
			headers: { "Content-Type": "application/json" },
			body: JSON.stringify(payload),
		});

		if (!response.ok) {
			throw new Error(
				`Webhook failed: ${response.status} ${response.statusText}`,
			);
		}
	}
}

// Email Alert Channel (using a simple SMTP service)
export class EmailAlertChannel implements AlertChannel {
	name = "email";
	private recipients: string[];
	private smtpConfig: any;

	constructor(recipients: string[], smtpConfig: any) {
		this.recipients = recipients;
		this.smtpConfig = smtpConfig;
	}

	async send(alert: Alert): Promise<void> {
		// Implementation would depend on your email service
		// This is a placeholder for the interface
		console.log(
			`๐Ÿ“ง Would send email alert to ${this.recipients.join(", ")}: ${
				alert.message
			}`,
		);
	}
}
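
A minimal end-to-end wiring sketch tying the monitor, alerting channels, and dashboard together. The file path, webhook URL, dashboard port, and import locations follow this guide's layout but are assumptions to adapt:

// src/monitoring/bootstrap.ts (hypothetical file)
import { HealthMonitor } from "./HealthMonitor";
import { PerformanceDashboard } from "./PerformanceDashboard";
import {
	AlertingSystem,
	ConsoleAlertChannel,
	WebhookAlertChannel,
} from "./AlertingSystem";
import { createRedisService } from "../examples/real-world-scenarios";

async function bootstrapMonitoring(): Promise<void> {
	const redis = createRedisService((process.env.REDIS_TYPE as any) || "single");
	await redis.initialize();

	const healthMonitor = new HealthMonitor(redis);

	const alerting = new AlertingSystem(healthMonitor);
	alerting.addChannel(new ConsoleAlertChannel());
	alerting.addChannel(
		new WebhookAlertChannel("https://example.com/hooks/redis-alerts"), // placeholder URL
	);

	// Dashboard from the previous section
	new PerformanceDashboard(healthMonitor).start(3001);

	// Start collecting metrics last so the first samples reach every consumer
	healthMonitor.startMonitoring(30000);
}

bootstrapMonitoring().catch((error) => {
	console.error("โŒ Failed to bootstrap monitoring:", error);
	process.exit(1);
});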

Monitoring and Maintenance

Monitoring Script

# Create monitoring script
sudo tee /usr/local/bin/redis-monitor.sh << 'EOF'
#!/bin/bash

# Redis cluster monitoring script
LOG_FILE="/var/log/redis/cluster-monitor.log"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')

echo "[$TIMESTAMP] Starting Redis cluster health check" >> $LOG_FILE

# Check each node
for port in {7000..7005}; do
    if redis-cli -p $port -a your-strong-password ping > /dev/null 2>&1; then
        echo "[$TIMESTAMP] Node $port: OK" >> $LOG_FILE
    else
        echo "[$TIMESTAMP] Node $port: FAILED" >> $LOG_FILE
        # Send alert (integrate with your monitoring system)
        # curl -X POST "https://your-webhook-url" -d "Redis node $port is down"
    fi
done

# Check cluster status
CLUSTER_STATE=$(redis-cli -p 7000 -a your-strong-password cluster info | grep cluster_state | cut -d: -f2 | tr -d '\r')

if [ "$CLUSTER_STATE" = "ok" ]; then
    echo "[$TIMESTAMP] Cluster state: OK" >> $LOG_FILE
else
    echo "[$TIMESTAMP] Cluster state: $CLUSTER_STATE" >> $LOG_FILE
    # Send alert
fi

echo "[$TIMESTAMP] Health check completed" >> $LOG_FILE
EOF

sudo chmod +x /usr/local/bin/redis-monitor.sh

# Add to crontab (append so existing entries are preserved)
(sudo crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/redis-monitor.sh") | sudo crontab -

Performance Monitoring

// src/monitoring/performance.ts
import { redisService } from "../services/RedisService";

class PerformanceMonitor {
	private metrics: Map<string, number[]> = new Map();

	async measureOperation<T>(
		operation: () => Promise<T>,
		operationName: string,
	): Promise<T> {
		const start = Date.now();

		try {
			const result = await operation();
			const duration = Date.now() - start;

			this.recordMetric(operationName, duration);

			return result;
		} catch (error) {
			const duration = Date.now() - start;
			this.recordMetric(`${operationName}_error`, duration);
			throw error;
		}
	}

	private recordMetric(operationName: string, duration: number): void {
		if (!this.metrics.has(operationName)) {
			this.metrics.set(operationName, []);
		}

		const metrics = this.metrics.get(operationName)!;
		metrics.push(duration);

		// Keep only last 100 measurements
		if (metrics.length > 100) {
			metrics.shift();
		}
	}

	getMetrics(operationName: string): {
		avg: number;
		min: number;
		max: number;
		count: number;
	} {
		const metrics = this.metrics.get(operationName) || [];

		if (metrics.length === 0) {
			return { avg: 0, min: 0, max: 0, count: 0 };
		}

		const sum = metrics.reduce((a, b) => a + b, 0);
		const avg = sum / metrics.length;
		const min = Math.min(...metrics);
		const max = Math.max(...metrics);

		return { avg, min, max, count: metrics.length };
	}

	getAllMetrics(): Record<string, any> {
		const result: Record<string, any> = {};

		for (const [operationName] of this.metrics) {
			result[operationName] = this.getMetrics(operationName);
		}

		return result;
	}
}

export const performanceMonitor = new PerformanceMonitor();

// Usage example
async function example() {
	const result = await performanceMonitor.measureOperation(
		() => redisService.get("user:1"),
		"get_user",
	);

	console.log("Metrics:", performanceMonitor.getMetrics("get_user"));
}

Security Best Practices

Network Security

# Security group configuration
aws ec2 create-security-group \
  --group-name redis-cluster-sg \
  --description "Redis cluster security group"

# Allow Redis cluster communication
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxx \
  --protocol tcp \
  --port 7000-7005 \
  --source-group sg-xxxxxxxxx

# Allow cluster bus communication
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxx \
  --protocol tcp \
  --port 17000-17005 \
  --source-group sg-xxxxxxxxx

# Allow Redis Insight (only from specific IPs)
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxx \
  --protocol tcp \
  --port 8001 \
  --cidr 10.0.0.0/8

# Allow SSH (from bastion host only)
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxxx \
  --protocol tcp \
  --port 22 \
  --source-group sg-bastion

Authentication and Authorization

# Generate strong password
REDIS_PASSWORD=$(openssl rand -base64 32)
echo "Redis password: $REDIS_PASSWORD"

# Create auth configuration
sudo tee /etc/redis/auth.conf << EOF
# Authentication
requirepass $REDIS_PASSWORD
masterauth $REDIS_PASSWORD

# Rename dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG "CONFIG_$REDIS_PASSWORD"
rename-command SHUTDOWN "SHUTDOWN_$REDIS_PASSWORD"
rename-command DEBUG ""
rename-command EVAL ""
rename-command SCRIPT ""
EOF

# Include auth config in main config
echo "include /etc/redis/auth.conf" >> /etc/redis/cluster/redis-7000.conf

TLS/SSL Configuration

# Generate SSL certificates
sudo mkdir -p /etc/redis/ssl
cd /etc/redis/ssl

# Create CA private key
sudo openssl genrsa -out ca-key.pem 4096

# Create CA certificate
sudo openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=CA/L=San Francisco/O=MyOrg/CN=Redis CA"

# Create server private key
sudo openssl genrsa -out redis-server-key.pem 4096

# Create server certificate signing request
sudo openssl req -subj "/C=US/ST=CA/L=San Francisco/O=MyOrg/CN=redis-server" -new -key redis-server-key.pem -out server.csr

# Create server certificate
sudo openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -out redis-server-cert.pem -CAcreateserial

# Set permissions
sudo chown -R redis:redis /etc/redis/ssl
sudo chmod 600 /etc/redis/ssl/*-key.pem

TLS Redis Configuration

# Add TLS configuration to Redis
sudo tee -a /etc/redis/cluster/redis-7000.conf << 'EOF'
# TLS Configuration
tls-port 7000
port 0
tls-cert-file /etc/redis/ssl/redis-server-cert.pem
tls-key-file /etc/redis/ssl/redis-server-key.pem
tls-ca-cert-file /etc/redis/ssl/ca.pem
tls-dh-params-file /etc/redis/ssl/redis.dh
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
EOF

# Generate DH parameters
sudo openssl dhparam -out /etc/redis/ssl/redis.dh 2048
sudo chown redis:redis /etc/redis/ssl/redis.dh
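
Once TLS is enabled, clients must connect over TLS as well. A minimal connection sketch is shown below; it assumes ioredis as the client library and the certificate paths generated above, so adapt it to whichever client your UniversalRedisService wraps:

// src/examples/tls-connection.ts (hypothetical file)
import Redis from "ioredis";
import fs from "fs";

const tlsRedis = new Redis({
	host: "redis-node-ip", // placeholder host
	port: 7000,
	password: process.env.REDIS_PASSWORD,
	tls: {
		// The CA generated in the previous step; a client cert/key is only
		// needed if the server is configured with `tls-auth-clients yes`
		ca: fs.readFileSync("/etc/redis/ssl/ca.pem"),
	},
});

async function verifyTlsConnection(): Promise<void> {
	console.log("๐Ÿ”’ TLS ping:", await tlsRedis.ping());
	await tlsRedis.quit();
}

verifyTlsConnection().catch((error) => {
	console.error("โŒ TLS connection failed:", error);
	process.exit(1);
});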

Troubleshooting

Common Issues and Solutions

1. Cluster Formation Issues

# Check cluster status
redis-cli -p 7000 -a your-password cluster info
redis-cli -p 7000 -a your-password cluster nodes

# Fix split-brain scenario
redis-cli -p 7000 -a your-password cluster reset hard
# Recreate cluster

# Check network connectivity
for port in {7000..7005}; do
    echo "Testing port $port..."
    nc -zv <node-ip> $port
done

2. Memory Issues

# Check memory usage
redis-cli -p 7000 -a your-password info memory

# Monitor memory in real-time
watch -n 1 'redis-cli -p 7000 -a your-password info memory | grep used_memory_human'

# Check for memory leaks
redis-cli -p 7000 -a your-password memory usage <key>

3. Performance Issues

# Monitor slow queries
redis-cli -p 7000 -a your-password slowlog get 10

# Check latency
redis-cli -p 7000 -a your-password --latency-history

# Monitor operations per second
redis-cli -p 7000 -a your-password --stat

4. Connection Issues

// Connection retry logic
import { redisService } from "../services/RedisService";

class ConnectionManager {
	private reconnectAttempts = 0;
	private maxReconnectAttempts = 5;
	private reconnectDelay = 1000;

	async handleConnectionError(error: Error): Promise<void> {
		console.error("Redis connection error:", error);

		if (this.reconnectAttempts < this.maxReconnectAttempts) {
			this.reconnectAttempts++;
			console.log(
				`Attempting to reconnect... (${this.reconnectAttempts}/${this.maxReconnectAttempts})`,
			);

			await new Promise((resolve) =>
				setTimeout(resolve, this.reconnectDelay),
			);

			try {
				await redisService.ping();
				console.log("Reconnected successfully");
				this.reconnectAttempts = 0;
			} catch (retryError) {
				await this.handleConnectionError(retryError as Error);
			}
		} else {
			console.error("Max reconnection attempts reached");
			throw error;
		}
	}
}
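
A hedged wiring sketch for the retry helper above. It assumes the service exposes its underlying client (as the auto-scaler example later does via `redisService.client`) and that the client emits "error" events, which ioredis-style clients do:

const connectionManager = new ConnectionManager();

// Assumption: redisService.client is an ioredis-style client that emits "error"
redisService.client.on("error", (err: Error) => {
	connectionManager.handleConnectionError(err).catch(() => {
		console.error("๐Ÿ›‘ Could not recover Redis connection, exiting");
		process.exit(1);
	});
});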

Debugging Tools

# Create debugging script
sudo tee /usr/local/bin/redis-debug.sh << 'EOF'
#!/bin/bash

echo "=== Redis Cluster Debug Information ==="
echo "Date: $(date)"
echo

echo "=== System Information ==="
echo "Memory usage:"
free -h
echo
echo "Disk usage:"
df -h
echo
echo "CPU usage:"
top -bn1 | grep "Cpu(s)"
echo

echo "=== Redis Process Information ==="
ps aux | grep redis
echo

echo "=== Redis Configuration ==="
for port in {7000..7005}; do
    echo "--- Node $port ---"
    redis-cli -p $port -a your-password config get maxmemory
    redis-cli -p $port -a your-password config get maxmemory-policy
done
echo

echo "=== Cluster Information ==="
redis-cli -p 7000 -a your-password cluster info
echo
redis-cli -p 7000 -a your-password cluster nodes
echo

echo "=== Memory Usage ==="
for port in {7000..7005}; do
    echo "--- Node $port ---"
    redis-cli -p $port -a your-password info memory | grep used_memory_human
done
echo

echo "=== Slow Log ==="
redis-cli -p 7000 -a your-password slowlog get 5
echo

echo "=== Network Statistics ==="
netstat -tuln | grep -E ':700[0-5]|:1700[0-5]'
echo

echo "=== Log Files ==="
echo "Recent Redis logs:"
tail -20 /var/log/redis/cluster/redis-7000.log
EOF

sudo chmod +x /usr/local/bin/redis-debug.sh

Log Analysis

# Create log analysis script
sudo tee /usr/local/bin/redis-log-analyzer.sh << 'EOF'
#!/bin/bash

LOG_DIR="/var/log/redis/cluster"
ANALYSIS_FILE="/tmp/redis-analysis.txt"

echo "Redis Log Analysis - $(date)" > $ANALYSIS_FILE
echo "=================================" >> $ANALYSIS_FILE

# Analyze error patterns
echo "ERROR PATTERNS:" >> $ANALYSIS_FILE
grep -h "ERROR\|WARNING\|CRITICAL" $LOG_DIR/*.log | sort | uniq -c | sort -nr >> $ANALYSIS_FILE

echo -e "\nCONNECTION ISSUES:" >> $ANALYSIS_FILE
grep -h "Connection refused\|Connection reset\|Connection timeout" $LOG_DIR/*.log | wc -l >> $ANALYSIS_FILE

echo -e "\nMEMORY ISSUES:" >> $ANALYSIS_FILE
grep -h "OOM\|memory\|evict" $LOG_DIR/*.log | tail -10 >> $ANALYSIS_FILE

echo -e "\nCLUSTER ISSUES:" >> $ANALYSIS_FILE
grep -h "cluster\|failover\|replica" $LOG_DIR/*.log | tail -10 >> $ANALYSIS_FILE

cat $ANALYSIS_FILE
EOF

sudo chmod +x /usr/local/bin/redis-log-analyzer.sh

Advanced Configuration

Custom Redis Modules

# Install custom modules
cd /tmp
git clone https://github.com/RedisLabsModules/RedisTimeSeries.git
cd RedisTimeSeries
make
sudo cp redismodule.so /opt/redis-stack/lib/redistimeseries-custom.so

# Load custom module
echo "loadmodule /opt/redis-stack/lib/redistimeseries-custom.so" >> /etc/redis/cluster/redis-7000.conf

Backup and Recovery

# Create backup script
sudo tee /usr/local/bin/redis-backup.sh << 'EOF'
#!/bin/bash

BACKUP_DIR="/var/backups/redis"
DATE=$(date +%Y%m%d_%H%M%S)
S3_BUCKET="your-redis-backups"

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup each node
for port in {7000..7005}; do
    echo "Backing up node $port..."

    # Record the last successful save, then trigger a background save
    LAST_SAVE=$(redis-cli -p $port -a your-password LASTSAVE)
    redis-cli -p $port -a your-password BGSAVE

    # Wait for the backup to complete (LASTSAVE changes once BGSAVE finishes)
    while [ "$(redis-cli -p $port -a your-password LASTSAVE)" -eq "$LAST_SAVE" ]; do
        sleep 1
    done

    # Copy RDB file
    cp /var/lib/redis/cluster/dump-$port.rdb $BACKUP_DIR/dump-$port-$DATE.rdb

    # Compress and upload to S3
    gzip $BACKUP_DIR/dump-$port-$DATE.rdb
    aws s3 cp $BACKUP_DIR/dump-$port-$DATE.rdb.gz s3://$S3_BUCKET/
done

# Clean up old backups (keep last 7 days)
find $BACKUP_DIR -name "*.rdb.gz" -mtime +7 -delete

echo "Backup completed: $DATE"
EOF

sudo chmod +x /usr/local/bin/redis-backup.sh

# Schedule daily backups at 02:00 (append so existing cron entries are preserved)
(sudo crontab -l 2>/dev/null; echo "0 2 * * * /usr/local/bin/redis-backup.sh") | sudo crontab -

Auto-scaling Configuration

// src/services/AutoScaler.ts
import { redisService } from "./RedisService";
import AWS from "aws-sdk";

export class RedisAutoScaler {
	private ec2 = new AWS.EC2();
	private cloudWatch = new AWS.CloudWatch();

	async checkMetrics(): Promise<{
		cpuUsage: number;
		memoryUsage: number;
		connections: number;
		opsPerSecond: number;
	}> {
		try {
			// Get Redis metrics
			const info = await redisService.client.info();
			const lines = info.split("\r\n");

			const metrics = {
				cpuUsage: 0,
				memoryUsage: 0,
				connections: 0,
				opsPerSecond: 0,
			};

			lines.forEach((line) => {
				if (line.startsWith("used_memory_rss:")) {
					const memory = parseInt(line.split(":")[1]);
					metrics.memoryUsage =
						(memory / (32 * 1024 * 1024 * 1024)) * 100; // Assuming 32GB instances
				}
				if (line.startsWith("connected_clients:")) {
					metrics.connections = parseInt(line.split(":")[1]);
				}
				if (line.startsWith("instantaneous_ops_per_sec:")) {
					metrics.opsPerSecond = parseInt(line.split(":")[1]);
				}
			});

			return metrics;
		} catch (error) {
			console.error("Error checking metrics:", error);
			throw error;
		}
	}

	async scaleUp(): Promise<void> {
		console.log("Scaling up Redis cluster...");

		// Launch new instances
		const params = {
			LaunchTemplateName: "redis-stack-template",
			MinCount: 2,
			MaxCount: 2,
			TagSpecifications: [
				{
					ResourceType: "instance",
					Tags: [
						{
							Key: "Name",
							Value: "redis-scale-up",
						},
					],
				},
			],
		};

		try {
			const result = await this.ec2.runInstances(params).promise();
			console.log(
				"New instances launched:",
				result.Instances?.map((i) => i.InstanceId),
			);

			// Wait for instances to be ready and add to cluster
			// Implementation depends on your specific setup
		} catch (error) {
			console.error("Error scaling up:", error);
			throw error;
		}
	}

	async scaleDown(): Promise<void> {
		console.log("Scaling down Redis cluster...");
		// Implementation for scaling down
		// Remove replicas from cluster and terminate instances
	}

	async autoScale(): Promise<void> {
		const metrics = await this.checkMetrics();

		const scaleUpThreshold = {
			cpuUsage: 80,
			memoryUsage: 85,
			connections: 8000,
			opsPerSecond: 100000,
		};

		const scaleDownThreshold = {
			cpuUsage: 30,
			memoryUsage: 40,
			connections: 1000,
			opsPerSecond: 10000,
		};

		if (
			metrics.cpuUsage > scaleUpThreshold.cpuUsage ||
			metrics.memoryUsage > scaleUpThreshold.memoryUsage ||
			metrics.connections > scaleUpThreshold.connections ||
			metrics.opsPerSecond > scaleUpThreshold.opsPerSecond
		) {
			await this.scaleUp();
		} else if (
			metrics.cpuUsage < scaleDownThreshold.cpuUsage &&
			metrics.memoryUsage < scaleDownThreshold.memoryUsage &&
			metrics.connections < scaleDownThreshold.connections &&
			metrics.opsPerSecond < scaleDownThreshold.opsPerSecond
		) {
			await this.scaleDown();
		}
	}
}
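
A minimal scheduling sketch for the auto-scaler above; the 5-minute interval is an assumption, and the host needs AWS credentials (or an IAM role) with EC2 permissions:

// Hypothetical periodic evaluation of the scaling thresholds
const autoScaler = new RedisAutoScaler();

setInterval(() => {
	autoScaler.autoScale().catch((error) => {
		console.error("โŒ Auto-scaling check failed:", error);
	});
}, 5 * 60 * 1000); // every 5 minutes
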
โœ… Production Checklist

๐ŸŽฏ Deployment Readiness Assessment

graph TD
    A[๐Ÿš€ Production Deployment] --> B{Infrastructure Ready?}
    B -->|โœ… Yes| C{Security Configured?}
    B -->|โŒ No| D[โš™๏ธ Setup Infrastructure]
    C -->|โœ… Yes| E{Monitoring Setup?}
    C -->|โŒ No| F[๐Ÿ”’ Configure Security]
    E -->|โœ… Yes| G{Performance Tested?}
    E -->|โŒ No| H[๐Ÿ“Š Setup Monitoring]
    G -->|โœ… Yes| I[๐ŸŽ‰ Deploy to Production]
    G -->|โŒ No| J[โšก Run Performance Tests]

    D --> B
    F --> C
    H --> E
    J --> G

    style I fill:#e8f5e8
    style A fill:#e1f5fe

๐Ÿ“‹ Pre-Deployment Checklist

๐Ÿ—๏ธ Infrastructure Requirements

  • EC2 Instances Configured

    • Appropriate instance types selected (r6i.xlarge recommended)
    • Enhanced networking enabled
    • Storage properly configured (gp3 SSD, 3000 IOPS)
    • All instances in correct AZs for high availability
  • Network Configuration

    • VPC with proper subnets configured
    • Security groups allowing Redis traffic (ports 7000-7005, 17000-17005)
    • Load balancer configured for Redis Insight access
    • DNS records configured (if using custom domains)
  • System Optimization (see the verification sketch after this list)

    • Kernel parameters optimized (vm.overcommit_memory=1, net.core.somaxconn=65535)
    • Transparent huge pages disabled
    • Memory and TCP settings optimized
    • File descriptor limits increased

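The kernel settings above can be spot-checked before go-live; a minimal verification sketch using the values from the checklist:

# Verify kernel parameters recommended earlier in this guide
sysctl vm.overcommit_memory net.core.somaxconn

# Transparent huge pages should report [never]
cat /sys/kernel/mm/transparent_hugepage/enabled

# File descriptor limit for the current user/shell
ulimit -n
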
๐Ÿ”ง Redis Configuration

  • Installation Complete

    • Redis Stack installed on all nodes
    • All required modules loaded (RedisJSON, RedisSearch, etc.)
    • Service files created and enabled
    • Proper file permissions set
  • Configuration Verified

    • Strong passwords generated and configured
    • Memory limits properly set
    • Persistence settings configured (AOF + RDB)
    • Cluster settings verified (if using cluster)
    • Dangerous commands renamed or disabled
  • Cluster Setup (if applicable)

    • All 6 nodes running and healthy
    • Cluster formation completed successfully
    • Hash slots properly distributed
    • Replication working correctly
    • Failover tested and verified

๐Ÿ”’ Security Hardening

  • Authentication & Authorization

    • Strong passwords implemented (>32 characters)
    • ACL users configured (if needed)
    • Dangerous commands disabled (FLUSHALL, FLUSHDB, etc.)
    • Admin commands password-protected
  • Network Security

    • Security groups properly configured
    • No unnecessary ports exposed
    • SSL/TLS certificates installed (if required)
    • VPC network isolation implemented
  • System Security

    • Redis running as non-root user
    • File permissions correctly set
    • SSH keys properly managed
    • System updates applied

๐Ÿ“Š Monitoring & Alerting

  • Health Monitoring

    • Health check scripts deployed
    • Performance monitoring active
    • Log aggregation configured
    • Dashboard accessible
  • Alert Configuration

    • Memory usage alerts (>85%)
    • Connection failure alerts
    • High latency alerts (>10ms)
    • Cluster state alerts
    • Alert channels configured (email, webhook, etc.)
  • Backup System

    • Automated backup scripts deployed
    • Backup verification tested
    • Restore procedures documented
    • S3 backup storage configured

๐Ÿงช Post-Deployment Verification

๐Ÿ” Health Checks

#!/bin/bash
# ๐Ÿงช Production Health Check Script

echo "๐Ÿ” Running production health checks..."

# Test basic connectivity
echo "๐Ÿ“ก Testing Redis connectivity..."
for port in 6379 7000 7001 7002 7003 7004 7005; do
    if redis-cli -p $port -a "$REDIS_PASSWORD" ping > /dev/null 2>&1; then
        echo "โœ… Port $port: OK"
    else
        echo "โŒ Port $port: FAILED"
        exit 1
    fi
done

# Test cluster status (if applicable)
if redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster info > /dev/null 2>&1; then
    echo "๐Ÿ” Testing cluster status..."
    CLUSTER_STATE=$(redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster info | grep cluster_state | cut -d: -f2 | tr -d '\r')

    if [ "$CLUSTER_STATE" = "ok" ]; then
        echo "โœ… Cluster state: OK"
    else
        echo "โŒ Cluster state: $CLUSTER_STATE"
        exit 1
    fi

    # Check node count
    NODE_COUNT=$(redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster nodes | wc -l)
    if [ "$NODE_COUNT" -eq 6 ]; then
        echo "โœ… All 6 cluster nodes present"
    else
        echo "โŒ Expected 6 nodes, found $NODE_COUNT"
        exit 1
    fi
fi

# Test memory usage
echo "๐Ÿ’พ Checking memory usage..."
for port in 7000 7001 7002; do
    MEMORY_USAGE=$(redis-cli -p $port -a "$REDIS_PASSWORD" info memory | grep used_memory_human | cut -d: -f2 | tr -d '\r')
    echo "๐Ÿ“Š Node $port memory usage: $MEMORY_USAGE"
done

# Test performance
echo "โšก Running performance test..."
redis-benchmark -h localhost -p 7000 -a "$REDIS_PASSWORD" -n 10000 -t set,get -q
if [ $? -eq 0 ]; then
    echo "โœ… Performance test passed"
else
    echo "โŒ Performance test failed"
    exit 1
fi

echo "๐ŸŽ‰ All health checks passed!"

๐Ÿ“ˆ Performance Benchmarks

#!/bin/bash
# ๐Ÿ“Š Production Performance Benchmarks

echo "๐Ÿ“Š Running production performance benchmarks..."

# Single instance benchmarks
if redis-cli -p 6379 -a "$REDIS_PASSWORD" ping > /dev/null 2>&1; then
    echo "๐Ÿ”ด Single Instance Benchmarks:"
    redis-benchmark -h localhost -p 6379 -a "$REDIS_PASSWORD" -n 100000 -t set,get,hset,hget -P 16 -q
fi

# Cluster benchmarks
if redis-cli -p 7000 -a "$REDIS_PASSWORD" ping > /dev/null 2>&1; then
    echo "๐ŸŸก Cluster Benchmarks:"
    redis-benchmark -h localhost -p 7000 -a "$REDIS_PASSWORD" -n 100000 -t set,get,hset,hget -P 16 -q --cluster
fi

# Expected results for r6i.xlarge:
echo "๐Ÿ“‹ Expected Performance (r6i.xlarge):"
echo "   ๐Ÿ”ด Single Instance: ~50,000 ops/sec"
echo "   ๐ŸŸก Redis Cluster: ~200,000 ops/sec"
echo "   โšก Latency: <1ms for 99% of requests"
echo "   ๐Ÿ’พ Memory: <85% usage under normal load"

๐ŸŽฏ Performance Targets

| Metric | ๐Ÿ”ด Single Instance | ๐ŸŸก Redis Cluster | ๐ŸŸข DragonDB |
|---|---|---|---|
| Operations/sec | 50,000+ | 200,000+ | 120,000+ |
| Latency (P99) | <2ms | <1ms | <1ms |
| Memory Efficiency | 85% max | 85% max | 70% max |
| Availability | 99.5% | 99.9% | 99.8% |
| Connection Limit | 10,000 | 50,000 | 30,000 |
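
To check a live node against the latency targets above, redis-cli ships with built-in latency tooling:

# Continuously sample round-trip latency (Ctrl+C to stop)
redis-cli -p 7000 -a "$REDIS_PASSWORD" --latency

# Rolling min/max/avg every 5 seconds, useful for spotting spikes
redis-cli -p 7000 -a "$REDIS_PASSWORD" --latency-history -i 5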

๐Ÿšจ Emergency Procedures

๐Ÿ”ฅ High Memory Usage (>90%)

# Emergency memory cleanup
echo "๐Ÿšจ High memory usage detected!"

# Check memory distribution
redis-cli -p 7000 -a "$REDIS_PASSWORD" info memory

# Find large keys
redis-cli -p 7000 -a "$REDIS_PASSWORD" --bigkeys

# Emergency cleanup (use with caution)
# redis-cli -p 7000 -a "$REDIS_PASSWORD" memory purge

# Scale up if needed (the instance must be stopped first; note the JSON value syntax)
# aws ec2 stop-instances --instance-ids i-xxx
# aws ec2 modify-instance-attribute --instance-id i-xxx --instance-type "{\"Value\": \"r6i.2xlarge\"}"
# aws ec2 start-instances --instance-ids i-xxx
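
If the data is pure cache and keys can safely be evicted, another non-destructive lever is to switch the eviction policy at runtime before scaling up. A sketch; allkeys-lru is an assumption and is only appropriate when every key can be rebuilt:

# Check the current policy and memory limit
redis-cli -p 7000 -a "$REDIS_PASSWORD" CONFIG GET maxmemory-policy
redis-cli -p 7000 -a "$REDIS_PASSWORD" CONFIG GET maxmemory

# Evict least-recently-used keys instead of rejecting writes
redis-cli -p 7000 -a "$REDIS_PASSWORD" CONFIG SET maxmemory-policy allkeys-lru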

๐Ÿ”Œ Connection Issues

# Diagnose connection problems
echo "๐Ÿ” Diagnosing connection issues..."

# Check network connectivity
nc -zv redis-node-ip 7000

# Check Redis process
ps aux | grep redis

# Check system resources
free -h
df -h

# Restart Redis if needed (last resort)
# sudo systemctl restart redis-7000

๐Ÿ”„ Cluster Split-Brain Recovery

# Recovery from cluster split-brain
echo "๐Ÿ”„ Recovering from cluster split-brain..."

# Check cluster state on all nodes
for port in 7000 7001 7002 7003 7004 7005; do
    echo "Node $port:"
    redis-cli -p $port -a "$REDIS_PASSWORD" cluster nodes | head -1
done

# If needed, reset and recreate cluster (DANGEROUS - DATA LOSS)
# for port in 7000 7001 7002 7003 7004 7005; do
#     redis-cli -p $port -a "$REDIS_PASSWORD" cluster reset hard
# done
#
# redis-cli --cluster create \
#   node1:7000 node2:7001 node3:7002 \
#   node4:7003 node5:7004 node6:7005 \
#   --cluster-replicas 1 -a "$REDIS_PASSWORD" --cluster-yes
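
Before resorting to the destructive reset above, it is worth trying redis-cli's built-in repair tooling, which can often close open slots or reassign missing ones without data loss:

# Report slot coverage problems and nodes that disagree about ownership
redis-cli --cluster check 127.0.0.1:7000 -a "$REDIS_PASSWORD"

# Attempt an automatic repair of open/unassigned slots
redis-cli --cluster fix 127.0.0.1:7000 -a "$REDIS_PASSWORD"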

๐Ÿ“š Operations Runbook

๐Ÿ”„ Daily Operations

  1. Morning Health Check

    • Run health check script
    • Review overnight alerts
    • Check memory and CPU usage
    • Verify backup completion
  2. Performance Review

    • Check dashboard metrics
    • Review slow query log
    • Monitor hit rates
    • Analyze traffic patterns
  3. Maintenance Tasks

    • Log rotation
    • Backup verification
    • Security updates (if needed)
    • Documentation updates

๐Ÿ“… Weekly Operations

  1. Performance Analysis

    • Weekly performance report
    • Capacity planning review
    • Trend analysis
    • Optimization opportunities
  2. Security Review

    • Access log review
    • Security patch updates
    • Certificate expiration check
    • Password rotation (if required)
  3. Disaster Recovery Test

    • Backup restore test
    • Failover simulation
    • Recovery time measurement
    • Documentation updates

๐ŸŽ‰ Success Metrics

๐Ÿ“Š Key Performance Indicators

  • ๐Ÿš€ Performance: >200K ops/sec sustained
  • โšก Latency: <1ms for 99% of requests
  • ๐Ÿ’พ Memory: <85% utilization
  • ๐Ÿ”„ Availability: >99.9% uptime
  • ๐ŸŽฏ Hit Rate: >95% cache hits
  • ๐Ÿ”— Connections: Handle 50K+ concurrent

๐Ÿ“ˆ Business Impact

  • ๐Ÿ’ฐ Cost Savings: 60-70% vs managed Redis
  • โšก Performance Improvement: 5x faster than traditional DB
  • ๐Ÿ”ง Operational Efficiency: Automated monitoring and alerts
  • ๐Ÿ“ฑ User Experience: Sub-second response times
  • ๐Ÿš€ Scalability: Horizontal scaling capability

๐ŸŽŠ Conclusion

๐Ÿ† What You've Accomplished

Congratulations! You've successfully implemented a world-class Redis infrastructure that combines:

graph LR
    A[๐ŸŽฏ Your Achievement] --> B[โšก Performance]
    A --> C[๐Ÿ”’ Security]
    A --> D[๐Ÿ“Š Monitoring]
    A --> E[๐Ÿ’ฐ Cost Efficiency]

    B --> B1[200K+ ops/sec]
    B --> B2[Sub-ms latency]

    C --> C1[Enterprise security]
    C --> C2[Network isolation]

    D --> D1[Real-time dashboards]
    D --> D2[Proactive alerts]

    E --> E1[70% cost savings]
    E --> E2[Self-hosted control]

    style A fill:#e8f5e8
    style B fill:#ffcdd2
    style C fill:#e1f5fe
    style D fill:#fff3e0
    style E fill:#f3e5f5

๐Ÿš€ Key Benefits Delivered

๐Ÿ’ช Performance Excellence

  • Sub-millisecond latency for 99% of requests
  • 200,000+ operations per second sustained throughput
  • Horizontal scaling capability with Redis Cluster
  • Advanced caching patterns with Redis Stack modules

๐Ÿ”’ Enterprise Security

  • Multi-layered security with authentication, network isolation, and encryption
  • Dangerous command protection and access control
  • Regular security auditing and monitoring
  • Compliance-ready infrastructure

๐Ÿ“Š Operational Excellence

  • Comprehensive monitoring with real-time dashboards
  • Proactive alerting for issue prevention
  • Automated backup and recovery procedures
  • Performance optimization tools and techniques

๐Ÿ’ฐ Cost Optimization

  • 60-70% cost savings compared to managed Redis services
  • Predictable pricing with self-hosted infrastructure
  • Resource optimization for maximum efficiency
  • No vendor lock-in with open-source solutions

๐ŸŽฏ Deployment Options Summary

| Scenario | Recommended Solution | Performance | Complexity | Cost |
|---|---|---|---|---|
| ๐Ÿงช Development | Docker + Single Instance | 10K ops/sec | Low | $25/month |
| ๐Ÿ“ฑ Small Apps | Single Instance on r5.large | 50K ops/sec | Low | $120/month |
| ๐Ÿš€ Production | Redis Cluster on r6i.xlarge | 200K+ ops/sec | Medium | $1,800/month |
| ๐Ÿ”ฎ Modern Stack | DragonDB Alternative | 120K ops/sec | Low | $300/month |

๐ŸŒŸ Next Steps

๐Ÿ“ˆ Immediate Actions

  1. Deploy to staging environment first
  2. Run comprehensive tests with your workload
  3. Train your team on operations procedures
  4. Set up monitoring and alerting
  5. Document custom configurations

๐Ÿš€ Advanced Optimizations

  1. Implement auto-scaling based on metrics
  2. Add geo-replication for global applications
  3. Integrate with CI/CD pipelines
  4. Explore Redis modules for specific use cases
  5. Optimize for your specific workload

๐ŸŽ“ Continuous Learning

  1. Monitor Redis community for updates
  2. Participate in Redis conferences and webinars
  3. Contribute to open-source projects
  4. Share knowledge with your team
  5. Stay updated on security best practices

๐Ÿ“ž Support & Resources

๐Ÿ†˜ When You Need Help

  • Redis Official Documentation: redis.io/documentation
  • Redis Community: Redis Discord
  • Stack Overflow: Tag your questions with redis
  • GitHub Issues: For specific Redis modules
  • AWS Support: For EC2 and infrastructure issues

๐Ÿ“š Additional Learning

  • Redis University: Free online courses
  • Redis Labs Blog: Latest updates and best practices
  • AWS Well-Architected Framework: Infrastructure best practices
  • Monitoring Best Practices: Prometheus, Grafana integration
  • Performance Tuning Guides: Advanced optimization techniques

๐Ÿ™ Final Words

You now have a production-ready, high-performance Redis infrastructure that can handle enterprise-scale workloads while maintaining cost efficiency and operational excellence.

Remember:

  • ๐Ÿ” Monitor continuously - Prevention is better than cure
  • ๐Ÿ“š Document everything - Your future self will thank you
  • ๐Ÿงช Test thoroughly - Especially before production changes
  • ๐Ÿค Share knowledge - Help your team and the community
  • ๐Ÿš€ Keep optimizing - There's always room for improvement

You've built something amazing! ๐ŸŽ‰


Happy caching! May your latencies be low and your throughput be high! โšก๐Ÿš€


๐Ÿ“‹ Quick Reference Commands

# Health check
redis-cli -p 7000 -a "password" ping

# Cluster status
redis-cli -p 7000 -a "password" cluster info

# Memory usage
redis-cli -p 7000 -a "password" info memory

# Performance test
redis-benchmark -h localhost -p 7000 -a "password" -n 10000 -t set,get -q

# Monitor real-time
redis-cli -p 7000 -a "password" --stat

# View slow queries
redis-cli -p 7000 -a "password" slowlog get 10

๐Ÿท๏ธ Tags

#Redis #AWS #EC2 #Performance #Clustering #NodeJS #TypeScript #DevOps #Database #Caching #Monitoring #Production #Self-Hosted #Cost-Optimization


Document Version: 2.0
Last Updated: July 2025
Compatibility: Redis 7.2+, Node.js 18+, AWS EC2
Maintained by: Your DevOps Team โค๏ธ

This guide provides a comprehensive setup for a production-ready Redis Stack cluster on AWS EC2 with high throughput and low latency capabilities. The configuration includes:

  • High Performance: Optimized for sub-millisecond latency and high throughput
  • Scalability: Cluster setup with automatic failover and horizontal scaling capabilities
  • Security: SSL/TLS encryption, authentication, and network security
  • Monitoring: Comprehensive monitoring and alerting system
  • Developer Experience: Full TypeScript/Node.js integration with Redis Insight UI
  • Cost Optimization: Self-hosted solution reducing cloud service costs

Key Benefits

  1. Performance: 200,000+ operations per second with sub-millisecond latency
  2. Reliability: 99.9% uptime with automatic failover
  3. Cost Savings: 60-70% cost reduction compared to managed Redis services
  4. Full Feature Set: All Redis Stack modules available
  5. Developer Productivity: Complete TypeScript integration and Redis Insight UI

Next Steps

  1. Deploy the cluster in a staging environment first
  2. Run comprehensive performance tests
  3. Train your team on operations procedures
  4. Set up monitoring and alerting
  5. Plan for disaster recovery scenarios

For additional support or questions, refer to the Redis documentation or consult with your DevOps team.
