A Visual Journey to High-Performance Database Solutions

Choose Your Adventure

- 🔴 Single Instance (Simple)
- 🟡 Redis Cluster (Scalable)
- 🟢 DragonDB Alternative
- 🔴 Single Instance Setup - Start here for simplicity
- 🟡 Redis Cluster Setup - Scale to millions of operations
- 🟢 DragonDB Alternative - Modern Rust-based solution
- Overview & Comparison
- Architecture Patterns
- Prerequisites
- EC2 Infrastructure
- Installation Methods
- Performance Tuning
- Management UI Setup
- Node.js Integration
- Monitoring & Alerts
- Security Hardening
- Troubleshooting
- Production Checklist
This comprehensive guide provides three distinct deployment paths for high-performance in-memory databases:
graph TD
A[Choose Your Path] --> B[🔴 Single Instance]
A --> C[🟡 Redis Cluster]
A --> D[🟢 DragonDB]
B --> B1[Perfect for: Development<br/>Small to Medium Apps<br/>< 10K ops/sec]
C --> C1[Perfect for: Production<br/>High Throughput<br/>200K+ ops/sec]
D --> D1[Perfect for: Modern Stack<br/>Rust Performance<br/>Redis Compatible]
style A fill:#e1f5fe
style B fill:#ffebee
style C fill:#fff3e0
style D fill:#e8f5e8
| Feature | 🔴 Single Instance | 🟡 Redis Cluster | 🟢 DragonDB |
|---|---|---|---|
| Complexity | 🟢 Simple | 🟡 Moderate | 🟢 Simple |
| Performance | 50K ops/sec | 200K+ ops/sec | 100K+ ops/sec |
| Memory | 32GB max | Unlimited | 64GB+ |
| High Availability | ❌ No | ✅ Yes | ✅ Yes |
| Cost | 💰 Low | 💰💰 Medium | 💰 Low |
| Setup Time | 15 mins | 2 hours | 30 mins |
| Production Ready | For small apps | ✅ Enterprise | ✅ Modern |
| Performance Highlights |
|---|
| Sub-millisecond latency |
| 200,000+ operations/second |
| Multi-module support |
| Real-time replication |
Perfect for: Development, small applications, learning Redis
✅ Good for:
- Development environments
- Small applications
- Learning and testing
- Cost-sensitive projects

❌ Avoid for:
- Production with >10K users
- High availability requirements
- Data larger than 32GB
- Critical business applications
┌─────────────────────────┐
│        Your App         │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│     Load Balancer       │
│       (Optional)        │
└───────────┬─────────────┘
            │
            ▼
┌─────────────────────────┐
│     Redis Instance      │
│  • All Redis Modules    │
│  • 32GB Memory          │
│  • Auto-persistence     │
└─────────────────────────┘
#!/bin/bash
# One-click Redis installation script
echo "Starting Redis Single Instance Setup..."
# System optimization
sudo sysctl -w vm.overcommit_memory=1
sudo sysctl -w net.core.somaxconn=65535
# Install Redis Stack (direct RPM install; the apt-style GPG keyring step is not needed with yum)
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm
# Create optimized configuration (the redis-stack-server service reads /etc/redis-stack.conf)
sudo tee /etc/redis-stack.conf << 'EOF'
# Single Instance Configuration
port 6379
bind 0.0.0.0
protected-mode yes
requirepass your-strong-password
# Memory settings
maxmemory 28gb
maxmemory-policy allkeys-lru
# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
# Performance
tcp-keepalive 300
timeout 0
tcp-backlog 511
# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/redisbloom.so
EOF
# Start Redis
sudo systemctl enable redis-stack-server
sudo systemctl start redis-stack-server
echo "โ
Redis Single Instance is ready!"
echo "๐ Connect at: localhost:6379"
echo "๐ Password: your-strong-password"// ๐ด Single Instance Redis Client
import { createClient } from "redis";
export class SingleRedisClient {
public client: any; // exposed so the migration script below can reach the raw client
constructor() {
this.client = createClient({
url: "redis://:your-strong-password@localhost:6379",
socket: {
reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
},
});
this.client.on("error", (err) =>
console.log("โ Redis Client Error", err),
);
this.client.on("connect", () => console.log("โ
Redis Connected"));
}
async connect() {
await this.client.connect();
}
// ๐ฏ Simple operations
async set(key: string, value: any, ttl?: number): Promise<void> {
const stringValue =
typeof value === "object" ? JSON.stringify(value) : value;
if (ttl) {
await this.client.setEx(key, ttl, stringValue);
} else {
await this.client.set(key, stringValue);
}
}
async get(key: string): Promise<any> {
const value = await this.client.get(key);
try {
return JSON.parse(value || "");
} catch {
return value;
}
}
// ๐จ JSON operations
async setJSON(key: string, data: object): Promise<void> {
await this.client.json.set(key, "$", data);
}
async getJSON(key: string): Promise<any> {
return await this.client.json.get(key);
}
// ๐ Search operations
async search(index: string, query: string): Promise<any> {
return await this.client.ft.search(index, query);
}
async disconnect() {
await this.client.disconnect();
}
}
// ๐ Usage Example
export const redis = new SingleRedisClient();

Perfect for: Production applications, high throughput, enterprise scale
✅ Perfect for:
- Production applications
- >50K operations/second
- Multi-gigabyte datasets
- High availability needs
- Enterprise applications

⚠️ Consider the complexity:
- Requires DevOps knowledge
- Higher operational overhead
- Cross-slot operations limited
- More monitoring required
- Network configuration
graph TB
subgraph "Application Layer"
APP[Your Node.js App]
LB[Load Balancer]
end
subgraph "Redis Cluster Layer"
subgraph "Master Nodes"
M1[Master 1<br/>Port: 7000<br/>Slots: 0-5460]
M2[Master 2<br/>Port: 7001<br/>Slots: 5461-10922]
M3[Master 3<br/>Port: 7002<br/>Slots: 10923-16383]
end
subgraph "Replica Nodes"
R1[Replica 1<br/>Port: 7003<br/>replicates Master 1]
R2[Replica 2<br/>Port: 7004<br/>replicates Master 2]
R3[Replica 3<br/>Port: 7005<br/>replicates Master 3]
end
end
APP --> LB
LB --> M1
LB --> M2
LB --> M3
M1 -.-> R1
M2 -.-> R2
M3 -.-> R3
style M1 fill:#ffcdd2
style M2 fill:#ffcdd2
style M3 fill:#ffcdd2
style R1 fill:#e3f2fd
style R2 fill:#e3f2fd
style R3 fill:#e3f2fd
How keys are distributed:

Key: "user:123"    → hash slot 5984  → Master 2 (port 7001)
Key: "session:abc" → hash slot 12000 → Master 3 (port 7002)
Key: "cache:xyz"   → hash slot 3000  → Master 1 (port 7000)

Each master handles ~5,461 hash slots (16,384 total ÷ 3 masters)
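You can check the slot for any key yourself with CLUSTER KEYSLOT (a quick sketch, assuming a cluster node from the setup below is listening on port 7000 and the generated password is in $REDIS_PASSWORD; the slot numbers above are illustrative):

# Ask the cluster which slot each key hashes to
redis-cli -c -p 7000 -a "$REDIS_PASSWORD" CLUSTER KEYSLOT "user:123"
redis-cli -c -p 7000 -a "$REDIS_PASSWORD" CLUSTER KEYSLOT "session:abc"
redis-cli -c -p 7000 -a "$REDIS_PASSWORD" CLUSTER KEYSLOT "cache:xyz"

Each command prints the slot number, and the owning master follows from the slot ranges shown above.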
sequenceDiagram
participant App as Application
participant M1 as Master 1
participant R1 as Replica 1
participant Cluster
App->>M1: Write Request
Note over M1: Master 1 crashes
Cluster->>Cluster: Detect failure (after cluster-node-timeout)
Cluster->>R1: Promote to Master
R1->>Cluster: Ready as Master
App->>R1: Write Request (now master)
Note over R1: Service restored
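To watch this failover happen on a running cluster, stop one master and inspect the topology afterwards (a sketch, assuming the per-node systemd units created by the cluster install script later in this guide):

sudo systemctl stop redis-7000                             # take Master 1 down
sleep 20                                                   # wait out cluster-node-timeout
redis-cli -c -p 7001 -a "$REDIS_PASSWORD" cluster nodes    # the former replica of 7000 now reports itself as a master
sudo systemctl start redis-7000                            # the old master rejoins as a replica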
Perfect for: Modern applications, Rust performance, Redis compatibility
DragonDB is a modern, Rust-based in-memory database that provides Redis compatibility with enhanced performance and memory efficiency.
graph LR
subgraph "Traditional Redis"
R1[C Language]
R2[Single-threaded]
R3[Manual memory management]
end
subgraph "DragonDB"
D1[Rust Language]
D2[Multi-threaded]
D3[Memory Safe]
end
R1 --> D1
R2 --> D2
R3 --> D3
style D1 fill:#e8f5e8
style D2 fill:#e8f5e8
style D3 fill:#e8f5e8
✅ Perfect for:
- Modern applications
- High memory efficiency
- Multi-threaded workloads
- Rust ecosystem integration
- Enhanced security needs

⚠️ Consider:
- Newer technology (less mature)
- Smaller community
- Limited third-party tools
- Documentation still growing
- Fewer Redis modules
| Metric | Redis | DragonDB | Improvement |
|---|---|---|---|
| Memory Usage | 100% | 60% | 40% less memory |
| CPU Efficiency | 100% | 150% | 50% better CPU |
| Throughput | 100K ops/sec | 120K ops/sec | 20% faster |
| Startup Time | 5 seconds | 2 seconds | 60% faster |
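These numbers depend heavily on hardware and workload, so treat them as indicative and measure on your own instance. Because DragonDB speaks the Redis protocol, the same redis-benchmark run works against both servers (a sketch, assuming each server listens on localhost:6379 with the password used elsewhere in this guide):

# Run the identical benchmark against Redis and DragonDB and compare the output
redis-benchmark -h 127.0.0.1 -p 6379 -a your-strong-password \
-t set,get -n 1000000 -c 50 --threads 4 -d 128 -q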
#!/bin/bash
# DragonDB Installation Script
echo "Starting DragonDB Installation..."
# Install Rust (optional: only needed if you build DragonDB from source; the prebuilt binary below does not require it)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
# Download and install DragonDB
wget https://github.com/dragonflydb/dragonfly/releases/latest/download/dragonfly-x86_64.tar.gz
tar -xzf dragonfly-x86_64.tar.gz
sudo mv dragonfly /usr/local/bin/
# Create DragonDB configuration
sudo mkdir -p /etc/dragondb
sudo tee /etc/dragondb/dragonfly.conf << 'EOF'
# ๐ข DragonDB Configuration
port 6379
bind 0.0.0.0
requirepass your-strong-password
# Memory settings
maxmemory 28gb
maxmemory_policy allkeys_lru
# Performance
tcp_keepalive 300
timeout 0
# Logging
logfile /var/log/dragondb/dragonfly.log
loglevel notice
# Multi-threading (DragonDB advantage)
proactor_threads 8
EOF
# Create systemd service
sudo tee /etc/systemd/system/dragondb.service << 'EOF'
[Unit]
Description=DragonDB Server
After=network.target
[Service]
Type=simple
User=redis
ExecStart=/usr/local/bin/dragonfly --flagfile=/etc/dragondb/dragonfly.conf
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
EOF
# Start DragonDB
sudo systemctl enable dragondb
sudo systemctl start dragondb
echo "โ
DragonDB is ready!"
echo "๐ Connect at: localhost:6379"
echo "๐ Password: your-strong-password"
echo "๐ Enjoy the performance boost!"// ๐ข DragonDB Client (Redis-compatible)
import { createClient } from "redis";
export class DragonDBClient {
public client: any; // exposed so the migration script below can use the raw client
constructor() {
this.client = createClient({
url: "redis://:your-strong-password@localhost:6379",
socket: {
reconnectStrategy: (retries) => Math.min(retries * 50, 1000),
},
});
this.client.on("error", (err) => console.log("โ DragonDB Error", err));
this.client.on("connect", () => console.log("๐ DragonDB Connected"));
}
// All Redis commands work the same!
async set(key: string, value: any, ttl?: number): Promise<void> {
const stringValue =
typeof value === "object" ? JSON.stringify(value) : value;
if (ttl) {
await this.client.setEx(key, ttl, stringValue);
} else {
await this.client.set(key, stringValue);
}
}
async get(key: string): Promise<any> {
const value = await this.client.get(key);
try {
return JSON.parse(value || "");
} catch {
return value;
}
}
// ๐ DragonDB specific optimizations
async bulkSet(data: Record<string, any>): Promise<void> {
const pipeline = this.client.multi();
Object.entries(data).forEach(([key, value]) => {
const stringValue =
typeof value === "object" ? JSON.stringify(value) : value;
pipeline.set(key, stringValue);
});
await pipeline.exec();
}
async healthCheck(): Promise<{ status: string; memory: string }> {
try {
const ping = await this.client.ping();
const info = await this.client.info("memory");
return {
status: ping === "PONG" ? "healthy" : "unhealthy",
memory:
info
.split("\n")
.find((line) => line.startsWith("used_memory_human:"))
?.split(":")[1] || "unknown",
};
} catch (error) {
return { status: "unhealthy", memory: "unknown" };
}
}
async connect() {
await this.client.connect();
}
async disconnect() {
await this.client.disconnect();
}
}
// ๐ Usage
export const dragondb = new DragonDBClient();

// Simple Redis-to-DragonDB migration script
import { redis } from "./redis-client";
import { dragondb } from "./dragondb-client";
export class RedToDragonMigration {
async migrateData(keyPattern: string = "*"): Promise<void> {
console.log("๐ Starting Redis โ DragonDB migration...");
try {
// Connect to both databases
await redis.connect();
await dragondb.connect();
// Get all keys matching pattern
const keys = await redis.client.keys(keyPattern);
console.log(`๐ Found ${keys.length} keys to migrate`);
// Migrate in batches
const batchSize = 1000;
for (let i = 0; i < keys.length; i += batchSize) {
const batch = keys.slice(i, i + batchSize);
const migration = batch.map(async (key) => {
const value = await redis.get(key);
const ttl = await redis.client.ttl(key);
if (ttl > 0) {
await dragondb.client.setEx(key, ttl, value);
} else {
await dragondb.set(key, value);
}
});
await Promise.all(migration);
console.log(
`Migrated batch ${Math.floor(i / batchSize) + 1}/${Math.ceil(keys.length / batchSize)}`,
);
}
console.log("๐ Migration completed successfully!");
} catch (error) {
console.error("โ Migration failed:", error);
throw error;
} finally {
await redis.disconnect();
await dragondb.disconnect();
}
}
}

graph TD
A[Prerequisites] --> B[AWS Setup]
A --> C[Local Tools]
A --> D[Knowledge]
B --> B1[AWS CLI configured]
B --> B2[VPC with subnets]
B --> B3[Security groups]
B --> B4[EC2 key pairs]
C --> C1[Node.js 18+]
C --> C2[TypeScript]
C --> C3[SSH client]
C --> C4[Docker optional]
D --> D1[Redis basics]
D --> D2[AWS EC2]
D --> D3[Linux commands]
D --> D4[Node.js/TS]
style A fill:#e1f5fe
style B fill:#fff3e0
style C fill:#e8f5e8
style D fill:#fce4ec
| Requirement | Description | How to Check |
|---|---|---|
| AWS CLI | Configured with appropriate permissions | `aws sts get-caller-identity` |
| VPC | With public/private subnets | `aws ec2 describe-vpcs` |
| Security Groups | Configured for Redis traffic | `aws ec2 describe-security-groups` |
| Key Pair | For EC2 SSH access | `aws ec2 describe-key-pairs` |
#!/bin/bash
# Quick prerequisites check script
echo "Checking prerequisites..."
# Check Node.js
if command -v node &> /dev/null; then
echo "Node.js: $(node --version)"
else
echo "Node.js not found. Install from https://nodejs.org"
fi
# Check npm/yarn
if command -v npm &> /dev/null; then
echo "npm: $(npm --version)"
elif command -v yarn &> /dev/null; then
echo "yarn: $(yarn --version)"
else
echo "Package manager not found"
fi
# Check TypeScript
if command -v tsc &> /dev/null; then
echo "TypeScript: $(tsc --version)"
else
echo "TypeScript not found. Install: npm install -g typescript"
fi
# Check AWS CLI
if command -v aws &> /dev/null; then
echo "AWS CLI: $(aws --version)"
aws sts get-caller-identity &> /dev/null && echo "AWS credentials configured" || echo "AWS credentials not configured"
else
echo "AWS CLI not found. Install from https://aws.amazon.com/cli/"
fi
echo "Prerequisites check complete!"

🟢 Beginner Level (Single Instance)
- Basic Redis commands (SET, GET, HSET), as shown in the quick warm-up after these lists
- Understanding of key-value stores
- Basic Node.js/TypeScript
🟡 Intermediate Level (Redis Cluster)
- Redis clustering concepts
- AWS EC2 management
- Linux system administration
- Network configuration
🔴 Advanced Level (Production)
- DevOps practices
- Monitoring and alerting
- Security hardening
- Performance tuning
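A quick warm-up with the beginner-level commands mentioned above (assuming a local instance protected with the password from the single-instance setup):

redis-cli -a your-strong-password SET greeting "hello"                  # store a string
redis-cli -a your-strong-password GET greeting                          # returns "hello"
redis-cli -a your-strong-password HSET user:1 name "Ada" role "admin"   # set hash fields
redis-cli -a your-strong-password HGETALL user:1                        # read the whole hash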
graph TD
A[Choose Instance Size] --> B{Application Type}
B -->|Development/Testing| C[t3.medium<br/>2 vCPU, 4GB RAM<br/>💰 $24/month]
B -->|Small Production| D[r5.large<br/>2 vCPU, 16GB RAM<br/>💰 $121/month]
B -->|High Performance| E[r6i.xlarge<br/>4 vCPU, 32GB RAM<br/>💰 $302/month]
B -->|Enterprise| F[r6i.2xlarge<br/>8 vCPU, 64GB RAM<br/>💰 $605/month]
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#ffcdd2
style F fill:#f3e5f5
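To launch the production-recommended size once the prerequisites above are in place, something like the following works (a sketch only; the AMI, key pair, security group, and subnet IDs are placeholders you must replace with your own):

aws ec2 run-instances \
--image-id ami-XXXXXXXXXXXXXXXXX \
--instance-type r6i.xlarge \
--key-name my-redis-key \
--security-group-ids sg-XXXXXXXX \
--subnet-id subnet-XXXXXXXX \
--block-device-mappings 'DeviceName=/dev/xvda,Ebs={VolumeSize=100,VolumeType=gp3,Iops=3000}' \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=redis-production}]'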
# Perfect for learning and development
Instance Type: t3.medium
vCPUs: 2
Memory: 4GB
Storage: 20GB gp3 SSD
Network: Up to 5 Gbps
Cost: ~$24/month
# Performance expectations:
- Operations/sec: ~10,000
- Concurrent connections: ~1,000
- Memory capacity: ~3GB usable

# Recommended for production workloads
Instance Type: r6i.xlarge
vCPUs: 4
Memory: 32GB
Storage: 100GB gp3 SSD (3000 IOPS)
Network: Up to 12.5 Gbps
Cost: ~$302/month
# Performance expectations:
- Operations/sec: ~200,000
- Concurrent connections: ~10,000
- Memory capacity: ~28GB usable
- Latency: <1ms for 99% of requests

flowchart TD
A[Start Installation] --> B{What's your goal?}
B -->|Quick Testing| C[Docker Method<br/>5 minutes]
B -->|Single Instance| D[Direct Install<br/>15 minutes]
B -->|Redis Cluster| E[Cluster Setup<br/>2 hours]
B -->|DragonDB| F[Modern Alternative<br/>30 minutes]
C --> G[Ready to Use!]
D --> G
E --> G
F --> G
style A fill:#e1f5fe
style C fill:#e8f5e8
style D fill:#ffebee
style E fill:#fff3e0
style F fill:#f1f8e9
style G fill:#e8f5e8
Perfect for: Testing, development, quick demos
#!/bin/bash
# Docker Redis Stack installation
echo "Installing Redis with Docker..."
# Pull Redis Stack image
docker pull redis/redis-stack:latest
# Run Redis Stack with all modules
docker run -d \
--name redis-stack \
--restart unless-stopped \
-p 6379:6379 \
-p 8001:8001 \
-e REDIS_ARGS="--requirepass your-strong-password" \
-v redis-data:/data \
redis/redis-stack:latest
echo "โ
Redis Stack is running!"
echo "๐ Redis: localhost:6379"
echo "๐จ Redis Insight: http://localhost:8001"
echo "๐ Password: your-strong-password"
# Test connection
echo "๐งช Testing connection..."
docker exec redis-stack redis-cli pingDocker Compose Setup:
# docker-compose.yml
version: "3.8"
services:
redis-stack:
image: redis/redis-stack:latest
container_name: redis-stack
restart: unless-stopped
ports:
- "6379:6379"
- "8001:8001"
environment:
- REDIS_ARGS=--requirepass your-strong-password
volumes:
- redis-data:/data
- ./redis.conf:/redis-stack.conf
command: redis-stack-server /redis-stack.conf
volumes:
redis-data:

Perfect for: Production single instance, full control
#!/bin/bash
# Production Redis installation script
set -e # Exit on error
echo "Starting Production Redis Installation..."
# System updates and dependencies
sudo yum update -y
sudo yum groupinstall -y "Development Tools"
sudo yum install -y gcc gcc-c++ make wget curl
# System optimization
echo "โ๏ธ Optimizing system parameters..."
sudo sysctl -w vm.overcommit_memory=1
sudo sysctl -w net.core.somaxconn=65535
sudo sysctl -w net.ipv4.tcp_max_syn_backlog=65535
# Make settings permanent
cat << 'EOF' | sudo tee -a /etc/sysctl.conf
# Redis optimizations
vm.overcommit_memory = 1
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
vm.swappiness = 1
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
EOF
# Disable transparent huge pages
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
# Install Redis Stack
echo "๐ฆ Installing Redis Stack..."
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm
# Create Redis user and directories
sudo useradd --system --home /var/lib/redis --shell /bin/false redis 2>/dev/null || true  # user may already exist; don't trip set -e
sudo mkdir -p /var/lib/redis /var/log/redis /etc/redis
sudo chown -R redis:redis /var/lib/redis /var/log/redis /etc/redis
# Generate strong password
REDIS_PASSWORD=$(openssl rand -base64 32)
echo "๐ Generated password: $REDIS_PASSWORD"
echo "$REDIS_PASSWORD" | sudo tee /etc/redis/password.txt
sudo chmod 600 /etc/redis/password.txt
sudo chown redis:redis /etc/redis/password.txt
# Create optimized configuration
sudo tee /etc/redis/redis.conf << EOF
# ๐ด Production Redis Configuration
port 6379
bind 0.0.0.0
protected-mode yes
requirepass $REDIS_PASSWORD
# Memory management
maxmemory 28gb
maxmemory-policy allkeys-lru
maxmemory-samples 5
# Persistence
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Network and performance
tcp-keepalive 300
timeout 0
tcp-backlog 511
databases 16
# Logging
loglevel notice
logfile /var/log/redis/redis.log
syslog-enabled yes
# Security
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG "CONFIG_$REDIS_PASSWORD"
# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
loadmodule /opt/redis-stack/lib/redisbloom.so
loadmodule /opt/redis-stack/lib/redisgraph.so
EOF
# Create systemd service
sudo tee /etc/systemd/system/redis.service << 'EOF'
[Unit]
Description=Redis In-Memory Data Store
After=network.target
[Service]
User=redis
Group=redis
ExecStart=/opt/redis-stack/bin/redis-server /etc/redis/redis.conf
ExecReload=/bin/kill -USR2 $MAINPID
TimeoutStopSec=0
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
EOF
# Start Redis
sudo systemctl daemon-reload
sudo systemctl enable redis
sudo systemctl start redis
# Verify installation
echo "๐งช Verifying installation..."
sudo systemctl status redis
redis-cli -a "$REDIS_PASSWORD" ping
echo "โ
Redis installation completed successfully!"
echo "๐ Connection: localhost:6379"
echo "๐ Password saved in: /etc/redis/password.txt"
echo "๐ Configuration: /etc/redis/redis.conf"
echo "๐ Logs: /var/log/redis/redis.log"Perfect for: High availability, horizontal scaling, enterprise
#!/bin/bash
# ๐ก Redis Cluster Installation Script
set -e
CLUSTER_NODES=6
REDIS_PASSWORD=$(openssl rand -base64 32)
echo "๐ก Installing Redis Cluster ($CLUSTER_NODES nodes)..."
echo "๐ Generated cluster password: $REDIS_PASSWORD"
# Function to create node configuration
create_node_config() {
local port=$1
local node_dir="/var/lib/redis/cluster/$port"
sudo mkdir -p "$node_dir"
sudo tee "/etc/redis/redis-$port.conf" << EOF
# Redis Cluster Node $port Configuration
port $port
bind 0.0.0.0
protected-mode yes
requirepass $REDIS_PASSWORD
masterauth $REDIS_PASSWORD
# Cluster settings
cluster-enabled yes
cluster-config-file nodes-$port.conf
cluster-node-timeout 15000
cluster-announce-ip $(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
cluster-announce-port $port
cluster-announce-bus-port $((port + 10000))
# Memory and performance
maxmemory 4gb
maxmemory-policy allkeys-lru
tcp-keepalive 300
timeout 0
# Persistence
appendonly yes
appendfilename "appendonly-$port.aof"
appendfsync everysec
# Logging
logfile /var/log/redis/redis-$port.log
loglevel notice
# Working directory
dir $node_dir
# Redis Stack modules
loadmodule /opt/redis-stack/lib/redisearch.so
loadmodule /opt/redis-stack/lib/redisjson.so
loadmodule /opt/redis-stack/lib/redistimeseries.so
EOF
# Create systemd service for this node
sudo tee "/etc/systemd/system/redis-$port.service" << EOF
[Unit]
Description=Redis Cluster Node $port
After=network.target
[Service]
Type=notify
User=redis
Group=redis
ExecStart=/opt/redis-stack/bin/redis-server /etc/redis/redis-$port.conf
ExecReload=/bin/kill -USR2 \$MAINPID
TimeoutStopSec=0
Restart=always
RestartSec=3
[Install]
WantedBy=multi-user.target
EOF
}
# Install Redis Stack (same as single instance)
echo "๐ฆ Installing Redis Stack..."
sudo yum install -y https://download.redis.io/redis-stack/redis-stack-server-7.2.0-v9.rhel7.x86_64.rpm
# Create Redis user and directories
sudo useradd --system --home /var/lib/redis --shell /bin/false redis 2>/dev/null || true
sudo mkdir -p /var/lib/redis/cluster /var/log/redis /etc/redis
sudo chown -R redis:redis /var/lib/redis /var/log/redis /etc/redis
# Create cluster nodes
echo "๐๏ธ Creating cluster nodes..."
for port in 7000 7001 7002 7003 7004 7005; do
echo "Creating node on port $port..."
create_node_config $port
sudo systemctl enable redis-$port
sudo systemctl start redis-$port
done
# Wait for nodes to start
echo "โณ Waiting for nodes to start..."
sleep 10
# Create cluster
echo "๐ Creating cluster..."
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
redis-cli --cluster create \
$LOCAL_IP:7000 $LOCAL_IP:7001 $LOCAL_IP:7002 \
$LOCAL_IP:7003 $LOCAL_IP:7004 $LOCAL_IP:7005 \
--cluster-replicas 1 \
-a "$REDIS_PASSWORD" \
--cluster-yes
echo "โ
Redis Cluster installation completed!"
echo "๐ Cluster nodes: $LOCAL_IP:7000-7005"
echo "๐ Password: $REDIS_PASSWORD"
echo "๐ Test cluster: redis-cli -c -p 7000 -a '$REDIS_PASSWORD' cluster info"
# Save cluster info
cat << EOF | sudo tee /etc/redis/cluster-info.txt
Redis Cluster Information
========================
Password: $REDIS_PASSWORD
Nodes: $LOCAL_IP:7000, $LOCAL_IP:7001, $LOCAL_IP:7002, $LOCAL_IP:7003, $LOCAL_IP:7004, $LOCAL_IP:7005
Connection examples:
redis-cli -c -p 7000 -a '$REDIS_PASSWORD'
redis-cli -c -h $LOCAL_IP -p 7000 -a '$REDIS_PASSWORD'
EOF
sudo chmod 600 /etc/redis/cluster-info.txt
sudo chown redis:redis /etc/redis/cluster-info.txt

#!/bin/bash
# DragonDB Installation Script
echo "Installing DragonDB..."
# Install Rust (optional: only needed if building DragonDB from source; the prebuilt release below does not require it)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source ~/.cargo/env
# Download and install DragonDB
DRAGONDB_VERSION="v1.12.0"
wget "https://github.com/dragonflydb/dragonfly/releases/download/$DRAGONDB_VERSION/dragonfly-x86_64.tar.gz"
tar -xzf dragonfly-x86_64.tar.gz
sudo mv dragonfly /usr/local/bin/
sudo chmod +x /usr/local/bin/dragonfly
# Create DragonDB user and directories
sudo useradd --system --home /var/lib/dragondb --shell /bin/false dragondb 2>/dev/null || true
sudo mkdir -p /var/lib/dragondb /var/log/dragondb /etc/dragondb
sudo chown -R dragondb:dragondb /var/lib/dragondb /var/log/dragondb /etc/dragondb
# Generate password
DRAGONDB_PASSWORD=$(openssl rand -base64 32)
echo "๐ Generated DragonDB password: $DRAGONDB_PASSWORD"
# Create configuration
sudo tee /etc/dragondb/dragonfly.conf << EOF
# ๐ข DragonDB Configuration
port 6379
bind 0.0.0.0
requirepass $DRAGONDB_PASSWORD
# Memory settings
maxmemory 28gb
maxmemory_policy allkeys_lru
# Performance (DragonDB advantage)
proactor_threads 8
tcp_keepalive 300
# Persistence
save_schedule "*:*"
dir /var/lib/dragondb
# Logging
logfile /var/log/dragondb/dragonfly.log
loglevel 1
EOF
# Create systemd service
sudo tee /etc/systemd/system/dragondb.service << 'EOF'
[Unit]
Description=DragonDB Server
After=network.target
[Service]
Type=simple
User=dragondb
Group=dragondb
ExecStart=/usr/local/bin/dragonfly --flagfile=/etc/dragondb/dragonfly.conf
Restart=always
RestartSec=3
KillMode=mixed
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
EOF
# Start DragonDB
sudo systemctl daemon-reload
sudo systemctl enable dragondb
sudo systemctl start dragondb
# Test connection
echo "๐งช Testing DragonDB connection..."
redis-cli -a "$DRAGONDB_PASSWORD" ping
echo "โ
DragonDB installation completed!"
echo "๐ Connection: localhost:6379"
echo "๐ Password: $DRAGONDB_PASSWORD"
echo "๐ Performance: Multi-threaded Rust power!"
# Save connection info
echo "$DRAGONDB_PASSWORD" | sudo tee /etc/dragondb/password.txt
sudo chmod 600 /etc/dragondb/password.txt
sudo chown dragondb:dragondb /etc/dragondb/password.txt

graph TB
A[Project Setup] --> B[Configuration]
B --> C[Connection Layer]
C --> D[Service Layer]
D --> E[Usage Examples]
E --> F[Performance Monitoring]
style A fill:#e3f2fd
style B fill:#fff3e0
style C fill:#e8f5e8
style D fill:#fce4ec
style E fill:#f3e5f5
style F fill:#e0f2f1
# Initialize a new Node.js project
mkdir redis-integration && cd redis-integration
npm init -y
# Install dependencies
npm install redis ioredis @types/redis
npm install -D typescript @types/node ts-node nodemon
# Install additional utilities
npm install dotenv joi compression helmet cors express
npm install -D @types/express @types/cors @types/compression
# Create TypeScript configuration
npx tsc --init --target es2020 --module commonjs --strict --esModuleInterop

Project Structure:
redis-integration/
├── src/
│   ├── config/
│   │   ├── redis.ts            # Redis configuration
│   │   └── database.ts         # Database settings
│   ├── services/
│   │   ├── RedisService.ts     # Single instance service
│   │   ├── ClusterService.ts   # Cluster service
│   │   └── DragonService.ts    # DragonDB service
│   ├── middleware/
│   │   ├── cache.ts            # Caching middleware
│   │   └── rate-limit.ts       # Rate limiting
│   ├── utils/
│   │   ├── performance.ts      # Performance monitoring
│   │   └── health.ts           # Health checks
│   └── examples/
│       ├── basic-usage.ts      # Basic examples
│       ├── advanced.ts         # Advanced patterns
│       └── real-world.ts       # Real-world scenarios
├── .env                        # Environment variables
├── package.json
└── tsconfig.json
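A sample .env covering the variables read by src/config/redis.ts and the Express app later in this guide (values are placeholders; use the password generated during installation):

# .env
REDIS_TYPE=single                # single | cluster | dragondb
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your-strong-password

# Cluster nodes (only needed when REDIS_TYPE=cluster)
REDIS_NODE_1=localhost
REDIS_NODE_2=localhost
REDIS_NODE_3=localhost
REDIS_NODE_4=localhost
REDIS_NODE_5=localhost
REDIS_NODE_6=localhost

# DragonDB (only needed when REDIS_TYPE=dragondb)
DRAGONDB_HOST=localhost
DRAGONDB_PORT=6379
DRAGONDB_PASSWORD=your-strong-password

PORT=3000
NODE_ENV=development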
// src/config/redis.ts
import { RedisOptions } from "ioredis";
import * as dotenv from "dotenv";
dotenv.config();
export interface DatabaseConfig {
type: "single" | "cluster" | "dragondb";
host?: string;
port?: number;
password?: string;
nodes?: Array<{ host: string; port: number }>;
maxRetriesPerRequest?: number;
retryDelayOnFailover?: number;
enableAutoPipelining?: boolean;
lazyConnect?: boolean;
}
// ๐ด Single Instance Configuration
export const singleInstanceConfig: DatabaseConfig = {
type: "single",
host: process.env.REDIS_HOST || "localhost",
port: parseInt(process.env.REDIS_PORT || "6379"),
password: process.env.REDIS_PASSWORD,
maxRetriesPerRequest: 3,
retryDelayOnFailover: 100,
enableAutoPipelining: true,
lazyConnect: true,
};
// ๐ก Cluster Configuration
export const clusterConfig: DatabaseConfig = {
type: "cluster",
nodes: [
{ host: process.env.REDIS_NODE_1 || "localhost", port: 7000 },
{ host: process.env.REDIS_NODE_2 || "localhost", port: 7001 },
{ host: process.env.REDIS_NODE_3 || "localhost", port: 7002 },
{ host: process.env.REDIS_NODE_4 || "localhost", port: 7003 },
{ host: process.env.REDIS_NODE_5 || "localhost", port: 7004 },
{ host: process.env.REDIS_NODE_6 || "localhost", port: 7005 },
],
password: process.env.REDIS_PASSWORD,
maxRetriesPerRequest: 3,
retryDelayOnFailover: 100,
enableAutoPipelining: true,
};
// ๐ข DragonDB Configuration
export const dragondbConfig: DatabaseConfig = {
type: "dragondb",
host: process.env.DRAGONDB_HOST || "localhost",
port: parseInt(process.env.DRAGONDB_PORT || "6379"),
password: process.env.DRAGONDB_PASSWORD,
maxRetriesPerRequest: 3,
retryDelayOnFailover: 50, // Faster recovery
enableAutoPipelining: true,
lazyConnect: true,
};
// Environment validation
export const validateConfig = (): void => {
const requiredEnvVars = ["REDIS_PASSWORD"];
for (const envVar of requiredEnvVars) {
if (!process.env[envVar]) {
throw new Error(`Missing required environment variable: ${envVar}`);
}
}
};

// src/services/ConnectionManager.ts
import Redis, { Cluster } from "ioredis";
import { DatabaseConfig } from "../config/redis";
export class ConnectionManager {
private static instance: ConnectionManager;
private connections: Map<string, Redis | Cluster> = new Map();
public static getInstance(): ConnectionManager {
if (!ConnectionManager.instance) {
ConnectionManager.instance = new ConnectionManager();
}
return ConnectionManager.instance;
}
async createConnection(
name: string,
config: DatabaseConfig,
): Promise<Redis | Cluster> {
if (this.connections.has(name)) {
return this.connections.get(name)!;
}
let client: Redis | Cluster;
switch (config.type) {
case "single":
case "dragondb":
client = new Redis({
host: config.host,
port: config.port,
password: config.password,
maxRetriesPerRequest: config.maxRetriesPerRequest,
enableAutoPipelining: config.enableAutoPipelining,
lazyConnect: config.lazyConnect,
// Connection settings
family: 4,
keepAlive: 10000, // TCP keep-alive initial delay in ms
enableReadyCheck: true,
// Fail fast instead of queueing commands while disconnected
enableOfflineQueue: false,
});
break;
case "cluster":
client = new Redis.Cluster(config.nodes!, {
redisOptions: {
password: config.password,
family: 4,
keepAlive: 10000,
connectTimeout: 10000,
commandTimeout: 5000,
maxRetriesPerRequest: config.maxRetriesPerRequest,
},
retryDelayOnFailover: config.retryDelayOnFailover,
enableAutoPipelining: config.enableAutoPipelining,
// Cluster-specific settings
scaleReads: "slave",
enableOfflineQueue: false,
});
break;
default:
throw new Error(`Unsupported database type: ${config.type}`);
}
// Add event listeners
this.setupEventListeners(client, name);
// Connect and test
await client.ping();
this.connections.set(name, client);
console.log(`${name} connection established (${config.type})`);
return client;
}
private setupEventListeners(client: Redis | Cluster, name: string): void {
client.on("connect", () => {
console.log(`๐ ${name}: Connected`);
});
client.on("ready", () => {
console.log(`${name}: Ready`);
});
client.on("error", (error) => {
console.error(`โ ${name}: Error -`, error.message);
});
client.on("close", () => {
console.log(`๐ ${name}: Connection closed`);
});
client.on("reconnecting", () => {
console.log(`๐ ${name}: Reconnecting...`);
});
if (client instanceof Redis.Cluster) {
client.on("node error", (error, node) => {
console.error(
`โ ${name}: Node error on ${node.options.host}:${node.options.port} -`,
error.message,
);
});
}
}
getConnection(name: string): Redis | Cluster | undefined {
return this.connections.get(name);
}
async closeConnection(name: string): Promise<void> {
const connection = this.connections.get(name);
if (connection) {
await connection.quit();
this.connections.delete(name);
console.log(`๐ ${name}: Connection closed and removed`);
}
}
async closeAllConnections(): Promise<void> {
const closePromises = Array.from(this.connections.keys()).map((name) =>
this.closeConnection(name),
);
await Promise.all(closePromises);
console.log("๐ All connections closed");
}
getConnectionStatus(): Record<string, string> {
const status: Record<string, string> = {};
this.connections.forEach((connection, name) => {
status[name] = connection.status;
});
return status;
}
}

// src/services/UniversalRedisService.ts
import { ConnectionManager } from "./ConnectionManager";
import { DatabaseConfig } from "../config/redis";
import Redis, { Cluster } from "ioredis";
export interface CacheOptions {
ttl?: number;
nx?: boolean; // Set only if not exists
xx?: boolean; // Set only if exists
}
export interface HashField {
[field: string]: string | number;
}
export interface ZSetMember {
score: number;
member: string;
}
export class UniversalRedisService {
private connection!: Redis | Cluster; // assigned in initialize()
private connectionManager: ConnectionManager;
constructor(
private config: DatabaseConfig,
private connectionName: string = "default",
) {
this.connectionManager = ConnectionManager.getInstance();
}
async initialize(): Promise<void> {
this.connection = await this.connectionManager.createConnection(
this.connectionName,
this.config,
);
}
// ๐ฏ String Operations
async set(
key: string,
value: any,
options?: CacheOptions,
): Promise<"OK" | null> {
const stringValue =
typeof value === "object" ? JSON.stringify(value) : String(value);
if (options?.ttl) {
return await this.connection.setex(key, options.ttl, stringValue);
}
if (options?.nx) {
return await this.connection.set(key, stringValue, "NX");
}
if (options?.xx) {
return await this.connection.set(key, stringValue, "XX");
}
return await this.connection.set(key, stringValue);
}
async get(key: string): Promise<any> {
const value = await this.connection.get(key);
if (value === null) return null;
try {
return JSON.parse(value);
} catch {
return value;
}
}
async mget(keys: string[]): Promise<(any | null)[]> {
const values = await this.connection.mget(...keys);
return values.map((value) => {
if (value === null) return null;
try {
return JSON.parse(value);
} catch {
return value;
}
});
}
async del(key: string | string[]): Promise<number> {
if (Array.isArray(key)) {
return await this.connection.del(...key);
}
return await this.connection.del(key);
}
async exists(key: string | string[]): Promise<number> {
if (Array.isArray(key)) {
return await this.connection.exists(...key);
}
return await this.connection.exists(key);
}
async expire(key: string, seconds: number): Promise<number> {
return await this.connection.expire(key, seconds);
}
async ttl(key: string): Promise<number> {
return await this.connection.ttl(key);
}
// ๐ Hash Operations
async hset(
key: string,
field: string | HashField,
value?: string | number,
): Promise<number> {
if (typeof field === "object") {
return await this.connection.hset(key, field);
}
return await this.connection.hset(key, field, String(value));
}
async hget(key: string, field: string): Promise<string | null> {
return await this.connection.hget(key, field);
}
async hgetall(key: string): Promise<Record<string, string>> {
return await this.connection.hgetall(key);
}
async hmget(key: string, fields: string[]): Promise<(string | null)[]> {
return await this.connection.hmget(key, ...fields);
}
async hdel(key: string, fields: string | string[]): Promise<number> {
if (Array.isArray(fields)) {
return await this.connection.hdel(key, ...fields);
}
return await this.connection.hdel(key, fields);
}
async hexists(key: string, field: string): Promise<number> {
return await this.connection.hexists(key, field);
}
// ๐ List Operations
async lpush(key: string, values: (string | number)[]): Promise<number> {
return await this.connection.lpush(key, ...values.map(String));
}
async rpush(key: string, values: (string | number)[]): Promise<number> {
return await this.connection.rpush(key, ...values.map(String));
}
async lpop(key: string): Promise<string | null> {
return await this.connection.lpop(key);
}
async rpop(key: string): Promise<string | null> {
return await this.connection.rpop(key);
}
async lrange(key: string, start: number, stop: number): Promise<string[]> {
return await this.connection.lrange(key, start, stop);
}
async llen(key: string): Promise<number> {
return await this.connection.llen(key);
}
// ๐ฏ Set Operations
async sadd(key: string, members: (string | number)[]): Promise<number> {
return await this.connection.sadd(key, ...members.map(String));
}
async smembers(key: string): Promise<string[]> {
return await this.connection.smembers(key);
}
async srem(key: string, members: (string | number)[]): Promise<number> {
return await this.connection.srem(key, ...members.map(String));
}
async sismember(key: string, member: string | number): Promise<number> {
return await this.connection.sismember(key, String(member));
}
async scard(key: string): Promise<number> {
return await this.connection.scard(key);
}
// ๐ Sorted Set Operations
async zadd(key: string, members: ZSetMember[]): Promise<number> {
const args: (string | number)[] = [];
members.forEach((member) => {
args.push(member.score, member.member);
});
return await this.connection.zadd(key, ...args);
}
async zrange(
key: string,
start: number,
stop: number,
withScores?: boolean,
): Promise<string[]> {
if (withScores) {
return await this.connection.zrange(key, start, stop, "WITHSCORES");
}
return await this.connection.zrange(key, start, stop);
}
async zrevrange(
key: string,
start: number,
stop: number,
withScores?: boolean,
): Promise<string[]> {
if (withScores) {
return await this.connection.zrevrange(
key,
start,
stop,
"WITHSCORES",
);
}
return await this.connection.zrevrange(key, start, stop);
}
async zrem(key: string, members: (string | number)[]): Promise<number> {
return await this.connection.zrem(key, ...members.map(String));
}
async zscore(key: string, member: string | number): Promise<string | null> {
return await this.connection.zscore(key, String(member));
}
async zcard(key: string): Promise<number> {
return await this.connection.zcard(key);
}
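// Counter, list-trim, and reverse-rank helpers, added so the analytics,
// notification, leaderboard, and rate-limit examples below type-check;
// thin wrappers over the corresponding ioredis commands.
async incr(key: string): Promise<number> {
return await this.connection.incr(key);
}
async decr(key: string): Promise<number> {
return await this.connection.decr(key);
}
async ltrim(key: string, start: number, stop: number): Promise<"OK"> {
return await this.connection.ltrim(key, start, stop);
}
async zrevrank(key: string, member: string): Promise<number | null> {
return await this.connection.zrevrank(key, member);
}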
// ๐ Advanced Operations
async pipeline(commands: Array<[string, ...any[]]>): Promise<any[]> {
const pipeline = this.connection.pipeline();
commands.forEach(([command, ...args]) => {
(pipeline as any)[command](...args);
});
const results = await pipeline.exec();
return results?.map((result) => result[1]) || [];
}
async transaction(commands: Array<[string, ...any[]]>): Promise<any[]> {
const multi = this.connection.multi();
commands.forEach(([command, ...args]) => {
(multi as any)[command](...args);
});
const results = await multi.exec();
return results?.map((result) => result[1]) || [];
}
// ๐ Search Operations (if Redis Stack is available)
async ftSearch(index: string, query: string, options?: any): Promise<any> {
try {
return await (this.connection as any).call(
"FT.SEARCH",
index,
query,
...(options || []),
);
} catch (error) {
console.warn(
"FT.SEARCH not available. Make sure Redis Stack is installed.",
);
throw error;
}
}
// ๐ JSON Operations (if RedisJSON is available)
async jsonSet(key: string, path: string, value: any): Promise<string> {
try {
return await (this.connection as any).call(
"JSON.SET",
key,
path,
JSON.stringify(value),
);
} catch (error) {
console.warn(
"JSON.SET not available. Make sure RedisJSON module is loaded.",
);
throw error;
}
}
async jsonGet(key: string, path?: string): Promise<any> {
try {
const result = await (this.connection as any).call(
"JSON.GET",
key,
path || ".",
);
return JSON.parse(result);
} catch (error) {
console.warn(
"JSON.GET not available. Make sure RedisJSON module is loaded.",
);
throw error;
}
}
// ๐ฅ Health and Monitoring
async ping(): Promise<string> {
return await this.connection.ping();
}
async info(section?: string): Promise<string> {
return await this.connection.info(section);
}
async memory(subcommand: string, ...args: any[]): Promise<any> {
return await this.connection.memory(subcommand, ...args);
}
async dbsize(): Promise<number> {
return await this.connection.dbsize();
}
async flushdb(): Promise<"OK"> {
return await this.connection.flushdb();
}
// Cluster-specific operations
async clusterInfo(): Promise<string> {
if (this.connection instanceof Redis.Cluster) {
return await this.connection.cluster("info");
}
throw new Error("Cluster operations only available in cluster mode");
}
async clusterNodes(): Promise<string> {
if (this.connection instanceof Redis.Cluster) {
return await this.connection.cluster("nodes");
}
throw new Error("Cluster operations only available in cluster mode");
}
// ๐ Connection Management
async disconnect(): Promise<void> {
await this.connectionManager.closeConnection(this.connectionName);
}
getConnectionStatus(): string {
return this.connection.status;
}
// Performance monitoring helper
async measureOperation<T>(
operation: () => Promise<T>,
operationName: string,
): Promise<T> {
const start = Date.now();
try {
const result = await operation();
const duration = Date.now() - start;
console.log(`โก ${operationName}: ${duration}ms`);
return result;
} catch (error) {
const duration = Date.now() - start;
console.error(
`โ ${operationName} failed after ${duration}ms:`,
error,
);
throw error;
}
}
}

// src/examples/real-world-scenarios.ts
import { UniversalRedisService } from "../services/UniversalRedisService";
import {
singleInstanceConfig,
clusterConfig,
dragondbConfig,
} from "../config/redis";
// ๐ Factory function to create service based on environment
export function createRedisService(
type?: "single" | "cluster" | "dragondb",
): UniversalRedisService {
const config =
type === "cluster"
? clusterConfig
: type === "dragondb"
? dragondbConfig
: singleInstanceConfig;
return new UniversalRedisService(config, `${type || "single"}-connection`);
}
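// Quick usage sketch for the factory above (REDIS_TYPE here is an environment
// variable, mirroring the Express app at the end of this guide).
export async function quickStart(): Promise<void> {
const service = createRedisService(
(process.env.REDIS_TYPE as "single" | "cluster" | "dragondb") || "single",
);
await service.initialize();
await service.set("greeting", { hello: "world" }, { ttl: 60 });
console.log(await service.get("greeting"));
await service.disconnect();
}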
// ๐ฑ Example 1: Session Management
export class SessionManager {
private redis: UniversalRedisService;
constructor(redisService: UniversalRedisService) {
this.redis = redisService;
}
async createSession(userId: string, sessionData: any): Promise<string> {
const sessionId = `session:${userId}:${Date.now()}`;
const sessionKey = `user_sessions:${sessionId}`;
await this.redis.set(
sessionKey,
{
userId,
createdAt: new Date().toISOString(),
...sessionData,
},
{ ttl: 3600 },
); // 1 hour
// Add to user's active sessions set
await this.redis.sadd(`user:${userId}:sessions`, [sessionId]);
return sessionId;
}
async getSession(sessionId: string): Promise<any> {
return await this.redis.get(`user_sessions:${sessionId}`);
}
async updateSessionActivity(sessionId: string): Promise<void> {
const sessionKey = `user_sessions:${sessionId}`;
const session = await this.redis.get(sessionKey);
if (session) {
session.lastActivity = new Date().toISOString();
await this.redis.set(sessionKey, session, { ttl: 3600 });
}
}
async destroySession(sessionId: string): Promise<void> {
const session = await this.redis.get(`user_sessions:${sessionId}`);
if (session) {
await this.redis.del(`user_sessions:${sessionId}`);
await this.redis.srem(`user:${session.userId}:sessions`, [
sessionId,
]);
}
}
async getUserActiveSessions(userId: string): Promise<string[]> {
return await this.redis.smembers(`user:${userId}:sessions`);
}
}
// ๐ Example 2: E-commerce Cart
export class ShoppingCart {
private redis: UniversalRedisService;
constructor(redisService: UniversalRedisService) {
this.redis = redisService;
}
async addItem(
userId: string,
productId: string,
quantity: number,
price: number,
): Promise<void> {
const cartKey = `cart:${userId}`;
// Add item to cart hash
await this.redis.hset(cartKey, {
[`${productId}:quantity`]: quantity,
[`${productId}:price`]: price,
[`${productId}:addedAt`]: Date.now(),
});
// Set cart expiration (30 days)
await this.redis.expire(cartKey, 30 * 24 * 3600);
// Update cart total
await this.updateCartTotal(userId);
}
async removeItem(userId: string, productId: string): Promise<void> {
const cartKey = `cart:${userId}`;
await this.redis.hdel(cartKey, [
`${productId}:quantity`,
`${productId}:price`,
`${productId}:addedAt`,
]);
await this.updateCartTotal(userId);
}
async getCart(userId: string): Promise<any> {
const cartKey = `cart:${userId}`;
const cartData = await this.redis.hgetall(cartKey);
const cart = { items: [], total: 0 };
const items: any = {};
Object.entries(cartData).forEach(([key, value]) => {
const [productId, field] = key.split(":");
if (!items[productId]) items[productId] = { productId };
if (field === "quantity")
items[productId].quantity = parseInt(value);
else if (field === "price")
items[productId].price = parseFloat(value);
else if (field === "addedAt")
items[productId].addedAt = new Date(parseInt(value));
});
cart.items = Object.values(items);
cart.total = cart.items.reduce(
(sum: number, item: any) => sum + item.quantity * item.price,
0,
);
return cart;
}
private async updateCartTotal(userId: string): Promise<void> {
const cart = await this.getCart(userId);
await this.redis.hset(`cart:${userId}`, { total: cart.total });
}
async clearCart(userId: string): Promise<void> {
await this.redis.del(`cart:${userId}`);
}
}
// ๐ Example 3: Real-time Analytics
export class AnalyticsService {
private redis: UniversalRedisService;
constructor(redisService: UniversalRedisService) {
this.redis = redisService;
}
async trackPageView(pageUrl: string, userId?: string): Promise<void> {
const today = new Date().toISOString().split("T")[0];
const hour = new Date().getHours();
// Daily page views
await this.redis.pipeline([
["incr", `analytics:pageviews:${today}`],
["incr", `analytics:pageviews:${today}:${hour}`],
["incr", `analytics:page:${pageUrl}:${today}`],
// raw pipeline commands take flat arguments: score first, then member
["zadd", `analytics:popular_pages:${today}`, Date.now(), pageUrl],
]);
// User tracking
if (userId) {
await this.redis.sadd(`analytics:daily_users:${today}`, [userId]);
await this.redis.lpush(`analytics:user:${userId}:history`, [
pageUrl,
]);
}
// Real-time stats (expire in 1 hour)
await this.redis.set(`analytics:realtime:current_hour`, hour, {
ttl: 3600,
});
}
async getDailyStats(date: string): Promise<any> {
const [pageViews, uniqueUsers, popularPages] = await Promise.all([
this.redis.get(`analytics:pageviews:${date}`),
this.redis.scard(`analytics:daily_users:${date}`),
this.redis.zrevrange(`analytics:popular_pages:${date}`, 0, 9, true),
]);
return {
date,
pageViews: parseInt(pageViews || "0"),
uniqueUsers,
popularPages: this.formatZSetResults(popularPages),
};
}
async getHourlyStats(date: string): Promise<number[]> {
const hourlyStats = [];
for (let hour = 0; hour < 24; hour++) {
const views = await this.redis.get(
`analytics:pageviews:${date}:${hour}`,
);
hourlyStats.push(parseInt(views || "0"));
}
return hourlyStats;
}
private formatZSetResults(
results: string[],
): Array<{ page: string; score: number }> {
const formatted = [];
for (let i = 0; i < results.length; i += 2) {
formatted.push({
page: results[i],
score: parseFloat(results[i + 1]),
});
}
return formatted;
}
}
// ๐ฎ Example 4: Gaming Leaderboard
export class LeaderboardService {
private redis: UniversalRedisService;
constructor(redisService: UniversalRedisService) {
this.redis = redisService;
}
async addScore(
gameId: string,
playerId: string,
score: number,
): Promise<void> {
const leaderboardKey = `leaderboard:${gameId}`;
// Add to sorted set
await this.redis.zadd(leaderboardKey, [{ score, member: playerId }]);
// Store player info
await this.redis.hset(`player:${playerId}`, {
lastScore: score,
lastPlayed: Date.now(),
[`game:${gameId}:bestScore`]: score,
});
// Update daily leaderboard
const today = new Date().toISOString().split("T")[0];
await this.redis.zadd(`leaderboard:${gameId}:daily:${today}`, [
{ score, member: playerId },
]);
}
async getTopPlayers(gameId: string, limit: number = 10): Promise<any[]> {
const results = await this.redis.zrevrange(
`leaderboard:${gameId}`,
0,
limit - 1,
true,
);
const leaderboard = [];
for (let i = 0; i < results.length; i += 2) {
const playerId = results[i];
const score = parseInt(results[i + 1]);
const playerInfo = await this.redis.hgetall(`player:${playerId}`);
leaderboard.push({
rank: Math.floor(i / 2) + 1,
playerId,
score,
...playerInfo,
});
}
return leaderboard;
}
async getPlayerRank(
gameId: string,
playerId: string,
): Promise<number | null> {
const rank = await this.redis.zrevrank(
`leaderboard:${gameId}`,
playerId,
);
return rank !== null ? rank + 1 : null;
}
async getPlayerScore(
gameId: string,
playerId: string,
): Promise<number | null> {
const score = await this.redis.zscore(
`leaderboard:${gameId}`,
playerId,
);
return score ? parseFloat(score) : null;
}
}
// ๐ Example 5: Notification System
export class NotificationService {
private redis: UniversalRedisService;
constructor(redisService: UniversalRedisService) {
this.redis = redisService;
}
async sendNotification(
userId: string,
notification: {
id: string;
title: string;
message: string;
type: "info" | "warning" | "error" | "success";
data?: any;
},
): Promise<void> {
const notificationKey = `notifications:${userId}`;
const notificationData = {
...notification,
timestamp: Date.now(),
read: false,
};
// Add to user's notification list
await this.redis.lpush(notificationKey, [
JSON.stringify(notificationData),
]);
// Keep only last 100 notifications
await this.redis.ltrim(notificationKey, 0, 99);
// Add to unread count
await this.redis.incr(`notifications:${userId}:unread`);
// Set expiration for cleanup (30 days)
await this.redis.expire(notificationKey, 30 * 24 * 3600);
}
async getNotifications(
userId: string,
limit: number = 20,
offset: number = 0,
): Promise<any[]> {
const notifications = await this.redis.lrange(
`notifications:${userId}`,
offset,
offset + limit - 1,
);
return notifications.map((n) => JSON.parse(n));
}
async markAsRead(userId: string, notificationId: string): Promise<void> {
const notifications = await this.getNotifications(userId, 100);
const updatedNotifications = notifications.map((n) => {
if (n.id === notificationId && !n.read) {
n.read = true;
this.redis.decr(`notifications:${userId}:unread`);
}
return JSON.stringify(n);
});
// Replace the list
await this.redis.del(`notifications:${userId}`);
if (updatedNotifications.length > 0) {
await this.redis.lpush(
`notifications:${userId}`,
updatedNotifications,
);
}
}
async getUnreadCount(userId: string): Promise<number> {
const count = await this.redis.get(`notifications:${userId}:unread`);
return parseInt(count || "0");
}
async markAllAsRead(userId: string): Promise<void> {
await this.redis.set(`notifications:${userId}:unread`, 0);
}
}
// ๐ Usage Examples
async function demonstrateUsage() {
// Initialize services
const redisService = createRedisService("single"); // or 'cluster' or 'dragondb'
await redisService.initialize();
const sessionManager = new SessionManager(redisService);
const cart = new ShoppingCart(redisService);
const analytics = new AnalyticsService(redisService);
const leaderboard = new LeaderboardService(redisService);
const notifications = new NotificationService(redisService);
try {
// 1. Session management
console.log("๐ Creating user session...");
const sessionId = await sessionManager.createSession("user123", {
email: "user@example.com",
role: "customer",
});
console.log(`Session created: ${sessionId}`);
// 2. Shopping cart
console.log("๐ Adding items to cart...");
await cart.addItem("user123", "product1", 2, 29.99);
await cart.addItem("user123", "product2", 1, 49.99);
const userCart = await cart.getCart("user123");
console.log("Cart contents:", userCart);
// 3. Analytics
console.log("๐ Tracking page views...");
await analytics.trackPageView("/product/123", "user123");
await analytics.trackPageView("/checkout", "user123");
const stats = await analytics.getDailyStats(
new Date().toISOString().split("T")[0],
);
console.log("Daily stats:", stats);
// 4. Leaderboard
console.log("๐ฎ Adding game scores...");
await leaderboard.addScore("game1", "player1", 1500);
await leaderboard.addScore("game1", "player2", 1200);
const topPlayers = await leaderboard.getTopPlayers("game1", 5);
console.log("Top players:", topPlayers);
// 5. Notifications
console.log("๐ Sending notifications...");
await notifications.sendNotification("user123", {
id: "notif1",
title: "Welcome!",
message: "Thanks for joining our platform",
type: "success",
});
const userNotifications = await notifications.getNotifications(
"user123",
);
console.log("User notifications:", userNotifications);
} catch (error) {
console.error("โ Error in demonstration:", error);
} finally {
await redisService.disconnect();
}
}
// Export for use
// The service classes above are already exported at their declarations;
// re-export only the demo runner here.
export { demonstrateUsage };

// src/middleware/cache.ts
import { Request, Response, NextFunction } from "express";
import { UniversalRedisService } from "../services/UniversalRedisService";
import crypto from "crypto";
interface CacheOptions {
ttl?: number;
keyGenerator?: (req: Request) => string;
condition?: (req: Request) => boolean;
skipCache?: (req: Request, res: Response) => boolean;
}
export function createCacheMiddleware(
redisService: UniversalRedisService,
options: CacheOptions = {},
) {
const defaultOptions = {
ttl: 300, // 5 minutes
keyGenerator: (req: Request) => {
const url = req.originalUrl || req.url;
const method = req.method;
const hash = crypto
.createHash("md5")
.update(`${method}:${url}`)
.digest("hex");
return `cache:${hash}`;
},
condition: () => true,
skipCache: (req: Request) => req.method !== "GET",
};
const config = { ...defaultOptions, ...options };
return async (req: Request, res: Response, next: NextFunction) => {
// Skip cache for non-GET requests or based on condition
if (config.skipCache(req, res) || !config.condition(req)) {
return next();
}
const cacheKey = config.keyGenerator(req);
try {
// Try to get from cache
const cachedData = await redisService.get(cacheKey);
if (cachedData) {
console.log(`๐ฏ Cache hit: ${cacheKey}`);
return res.json(cachedData);
}
// Store the original json method
const originalJson = res.json.bind(res);
// Override the json method to cache the response
res.json = function (data: any) {
// Cache the response
redisService
.set(cacheKey, data, { ttl: config.ttl })
.catch((err) => {
console.error("โ Cache write error:", err);
});
console.log(`๐พ Cache miss, stored: ${cacheKey}`);
return originalJson(data);
};
next();
} catch (error) {
console.error("โ Cache middleware error:", error);
next(); // Continue without cache on error
}
};
}
// src/middleware/rate-limit.ts
export function createRateLimitMiddleware(
redisService: UniversalRedisService,
options: {
windowMs: number;
maxRequests: number;
keyGenerator?: (req: Request) => string;
onLimitReached?: (req: Request, res: Response) => void;
},
) {
const defaultKeyGenerator = (req: Request) => {
return `rate_limit:${req.ip}:${Math.floor(
Date.now() / options.windowMs,
)}`;
};
const keyGenerator = options.keyGenerator || defaultKeyGenerator;
return async (req: Request, res: Response, next: NextFunction) => {
const key = keyGenerator(req);
try {
const current = await redisService.incr(key);
if (current === 1) {
// Set expiration for new key
await redisService.expire(
key,
Math.ceil(options.windowMs / 1000),
);
}
// Add headers
res.set({
"X-RateLimit-Limit": options.maxRequests.toString(),
"X-RateLimit-Remaining": Math.max(
0,
options.maxRequests - current,
).toString(),
"X-RateLimit-Reset": new Date(
Date.now() + options.windowMs,
).toISOString(),
});
if (current > options.maxRequests) {
if (options.onLimitReached) {
options.onLimitReached(req, res);
}
return res.status(429).json({
error: "Too Many Requests",
message: `Rate limit exceeded. Try again in ${Math.ceil(
options.windowMs / 1000,
)} seconds.`,
});
}
next();
} catch (error) {
console.error("โ Rate limiting error:", error);
next(); // Continue without rate limiting on error
}
};
}
// src/app.ts - Complete Express.js Application
import express from "express";
import cors from "cors";
import helmet from "helmet";
import compression from "compression";
import { createRedisService } from "./examples/real-world-scenarios";
import { createCacheMiddleware } from "./middleware/cache";
import { createRateLimitMiddleware } from "./middleware/rate-limit";
import {
SessionManager,
ShoppingCart,
AnalyticsService,
} from "./examples/real-world-scenarios";
const app = express();
// Middleware
app.use(helmet());
app.use(cors());
app.use(compression());
app.use(express.json());
// Initialize Redis service
const redisService = createRedisService(
(process.env.REDIS_TYPE as any) || "single",
);
// Initialize services
let sessionManager: SessionManager;
let cart: ShoppingCart;
let analytics: AnalyticsService;
// Cache middleware
const cacheMiddleware = createCacheMiddleware(redisService, {
ttl: 600, // 10 minutes
condition: (req) => req.url.startsWith("/api/public/"),
});
// Rate limiting middleware
const rateLimitMiddleware = createRateLimitMiddleware(redisService, {
windowMs: 15 * 60 * 1000, // 15 minutes
maxRequests: 100,
keyGenerator: (req) => `rate_limit:${req.ip}`,
});
// Apply rate limiting globally
app.use(rateLimitMiddleware);
// Health check endpoint
app.get("/health", async (req, res) => {
try {
const ping = await redisService.ping();
const status = redisService.getConnectionStatus();
res.json({
status: "healthy",
redis: {
ping,
status,
type: process.env.REDIS_TYPE || "single",
},
timestamp: new Date().toISOString(),
});
} catch (error) {
res.status(500).json({
status: "unhealthy",
error: error instanceof Error ? error.message : "Unknown error",
});
}
});
// Public API endpoints (cached)
app.get("/api/public/stats", cacheMiddleware, async (req, res) => {
try {
const today = new Date().toISOString().split("T")[0];
const stats = await analytics.getDailyStats(today);
res.json(stats);
} catch (error) {
res.status(500).json({ error: "Failed to fetch stats" });
}
});
// User session endpoints
app.post("/api/sessions", async (req, res) => {
try {
const { userId, sessionData } = req.body;
const sessionId = await sessionManager.createSession(
userId,
sessionData,
);
res.json({ sessionId });
} catch (error) {
res.status(500).json({ error: "Failed to create session" });
}
});
app.get("/api/sessions/:sessionId", async (req, res) => {
try {
const session = await sessionManager.getSession(req.params.sessionId);
if (!session) {
return res.status(404).json({ error: "Session not found" });
}
res.json(session);
} catch (error) {
res.status(500).json({ error: "Failed to fetch session" });
}
});
// Shopping cart endpoints
app.post("/api/cart/:userId/items", async (req, res) => {
try {
const { productId, quantity, price } = req.body;
await cart.addItem(req.params.userId, productId, quantity, price);
res.json({ success: true });
} catch (error) {
res.status(500).json({ error: "Failed to add item to cart" });
}
});
app.get("/api/cart/:userId", async (req, res) => {
try {
const userCart = await cart.getCart(req.params.userId);
res.json(userCart);
} catch (error) {
res.status(500).json({ error: "Failed to fetch cart" });
}
});
// Analytics endpoints
app.post("/api/analytics/pageview", async (req, res) => {
try {
const { pageUrl, userId } = req.body;
await analytics.trackPageView(pageUrl, userId);
res.json({ success: true });
} catch (error) {
res.status(500).json({ error: "Failed to track page view" });
}
});
// Error handling middleware
app.use(
(
error: any,
req: express.Request,
res: express.Response,
next: express.NextFunction,
) => {
console.error("โ Unhandled error:", error);
res.status(500).json({
error: "Internal Server Error",
message:
process.env.NODE_ENV === "development"
? error.message
: "Something went wrong",
});
},
);
// Initialize and start server
async function startServer() {
try {
console.log("๐ Initializing Redis connection...");
await redisService.initialize();
// Initialize services
sessionManager = new SessionManager(redisService);
cart = new ShoppingCart(redisService);
analytics = new AnalyticsService(redisService);
console.log("โ
Services initialized");
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`๐ Server running on port ${PORT}`);
console.log(`๐ Health check: http://localhost:${PORT}/health`);
});
} catch (error) {
console.error("โ Failed to start server:", error);
process.exit(1);
}
}
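// Optional monitoring wiring (a sketch, assuming the HealthMonitor and
// PerformanceDashboard classes defined later in this guide, with matching import paths):
//
// import { HealthMonitor } from "./monitoring/HealthMonitor";
// import { PerformanceDashboard } from "./monitoring/PerformanceDashboard";
//
// const healthMonitor = new HealthMonitor(redisService);
// healthMonitor.startMonitoring(30000); // collect metrics every 30 seconds
// healthMonitor.on("alert", (alert) => console.warn("Redis alert:", alert.message));
// new PerformanceDashboard(healthMonitor).start(3001); // dashboard on its own port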
// Graceful shutdown
process.on("SIGINT", async () => {
console.log("๐ Shutting down gracefully...");
await redisService.disconnect();
process.exit(0);
});
process.on("SIGTERM", async () => {
console.log("๐ Received SIGTERM, shutting down...");
await redisService.disconnect();
process.exit(0);
});
// Start the server
if (require.main === module) {
startServer();
}
export { app, startServer };

graph TB
A[๐ Monitoring Strategy] --> B[๐ Health Checks]
A --> C[๐ Performance Metrics]
A --> D[๐จ Alerting]
A --> E[๐ Logging]
B --> B1[Connection Status]
B --> B2[Memory Usage]
B --> B3[Cluster Health]
C --> C1[Operations/sec]
C --> C2[Latency]
C --> C3[Hit Rate]
C --> C4[CPU Usage]
D --> D1[Threshold Alerts]
D --> D2[Anomaly Detection]
D --> D3[Predictive Alerts]
E --> E1[Application Logs]
E --> E2[Redis Logs]
E --> E3[System Logs]
style A fill:#e1f5fe
style B fill:#e8f5e8
style C fill:#fff3e0
style D fill:#ffebee
style E fill:#f3e5f5
// src/monitoring/HealthMonitor.ts
import { UniversalRedisService } from "../services/UniversalRedisService";
import { EventEmitter } from "events";
export interface HealthMetrics {
timestamp: number;
connectionStatus: string;
memory: {
used: number;
max: number;
percentage: number;
};
performance: {
operationsPerSecond: number;
avgLatency: number;
hitRate: number;
};
cluster?: {
state: string;
connectedNodes: number;
totalNodes: number;
};
system: {
cpuUsage: number;
freeMemory: number;
};
}
export interface AlertRule {
name: string;
condition: (metrics: HealthMetrics) => boolean;
severity: "low" | "medium" | "high" | "critical";
message: (metrics: HealthMetrics) => string;
cooldown: number; // seconds
}
export class HealthMonitor extends EventEmitter {
private redisService: UniversalRedisService;
private metricsHistory: HealthMetrics[] = [];
private alertRules: AlertRule[] = [];
private lastAlerts: Map<string, number> = new Map();
private monitoringInterval?: NodeJS.Timeout;
constructor(redisService: UniversalRedisService) {
super();
this.redisService = redisService;
this.setupDefaultAlertRules();
}
private setupDefaultAlertRules(): void {
this.alertRules = [
{
name: "high_memory_usage",
condition: (metrics) => metrics.memory.percentage > 85,
severity: "high",
message: (metrics) =>
`Memory usage critical: ${metrics.memory.percentage.toFixed(
1,
)}%`,
cooldown: 300, // 5 minutes
},
{
name: "connection_lost",
condition: (metrics) => metrics.connectionStatus !== "ready",
severity: "critical",
message: (metrics) =>
`Redis connection lost: ${metrics.connectionStatus}`,
cooldown: 60, // 1 minute
},
{
name: "high_latency",
condition: (metrics) => metrics.performance.avgLatency > 10,
severity: "medium",
message: (metrics) =>
`High latency detected: ${metrics.performance.avgLatency}ms`,
cooldown: 300,
},
{
name: "low_hit_rate",
condition: (metrics) => metrics.performance.hitRate < 80,
severity: "medium",
message: (metrics) =>
`Low cache hit rate: ${metrics.performance.hitRate.toFixed(
1,
)}%`,
cooldown: 600, // 10 minutes
},
{
name: "cluster_degraded",
condition: (metrics) =>
metrics.cluster !== undefined &&
metrics.cluster.connectedNodes < metrics.cluster.totalNodes,
severity: "high",
message: (metrics) =>
`Cluster degraded: ${metrics.cluster?.connectedNodes}/${metrics.cluster?.totalNodes} nodes`,
cooldown: 180, // 3 minutes
},
];
}
async collectMetrics(): Promise<HealthMetrics> {
const start = Date.now();
try {
// Basic connection test
await this.redisService.ping();
const connectionStatus = this.redisService.getConnectionStatus();
// Memory information
const memoryInfo = await this.redisService.info("memory");
const memoryLines = memoryInfo.split("\r\n");
const usedMemory = this.parseInfoValue(memoryLines, "used_memory");
const maxMemory =
this.parseInfoValue(memoryLines, "maxmemory") ||
32 * 1024 * 1024 * 1024; // Default 32GB
// Performance metrics
const statsInfo = await this.redisService.info("stats");
const statsLines = statsInfo.split("\r\n");
const totalConnections =
this.parseInfoValue(statsLines, "total_connections_received") ||
0;
const totalCommands =
this.parseInfoValue(statsLines, "total_commands_processed") ||
0;
const cacheHits =
this.parseInfoValue(statsLines, "keyspace_hits") || 0;
const cacheMisses =
this.parseInfoValue(statsLines, "keyspace_misses") || 0;
const latency = Date.now() - start;
const hitRate =
cacheHits + cacheMisses > 0
? (cacheHits / (cacheHits + cacheMisses)) * 100
: 100;
// Calculate operations per second (approximation)
const previousMetrics =
this.metricsHistory[this.metricsHistory.length - 1];
let operationsPerSecond = 0;
if (previousMetrics) {
const timeDiff =
(Date.now() - previousMetrics.timestamp) / 1000;
const commandsDiff =
totalCommands - (previousMetrics as any).totalCommands;
operationsPerSecond =
timeDiff > 0 ? commandsDiff / timeDiff : 0;
}
// Cluster information (if applicable)
let clusterInfo;
try {
const clusterState = await this.redisService.clusterInfo();
const clusterNodes = await this.redisService.clusterNodes();
clusterInfo = {
state: this.parseClusterState(clusterState),
connectedNodes: this.countConnectedNodes(clusterNodes),
totalNodes: this.countTotalNodes(clusterNodes),
};
} catch (error) {
// Single instance - no cluster info
}
// System metrics (simplified)
const systemInfo = {
cpuUsage: 0, // Would need system monitoring
freeMemory: process.memoryUsage().heapUsed,
};
const metrics: HealthMetrics = {
timestamp: Date.now(),
connectionStatus,
memory: {
used: usedMemory,
max: maxMemory,
percentage: (usedMemory / maxMemory) * 100,
},
performance: {
operationsPerSecond,
avgLatency: latency,
hitRate,
},
cluster: clusterInfo,
system: systemInfo,
};
// Store for historical analysis
(metrics as any).totalCommands = totalCommands;
return metrics;
} catch (error) {
console.error("โ Error collecting metrics:", error);
throw error;
}
}
private parseInfoValue(lines: string[], key: string): number {
const line = lines.find((l) => l.startsWith(`${key}:`));
return line ? parseInt(line.split(":")[1]) : 0;
}
private parseClusterState(clusterInfo: string): string {
const stateLine = clusterInfo
.split("\r\n")
.find((l) => l.startsWith("cluster_state:"));
return stateLine ? stateLine.split(":")[1] : "unknown";
}
private countConnectedNodes(nodesInfo: string): number {
return nodesInfo
.split("\n")
.filter(
(line) =>
line.includes("connected") &&
!line.includes("disconnected"),
).length;
}
private countTotalNodes(nodesInfo: string): number {
return nodesInfo.split("\n").filter((line) => line.trim()).length;
}
startMonitoring(intervalMs: number = 30000): void {
console.log(
`๐ Starting health monitoring (interval: ${intervalMs}ms)`,
);
this.monitoringInterval = setInterval(async () => {
try {
const metrics = await this.collectMetrics();
this.metricsHistory.push(metrics);
// Keep only last 100 metrics (for memory efficiency)
if (this.metricsHistory.length > 100) {
this.metricsHistory.shift();
}
// Check alert rules
this.checkAlertRules(metrics);
// Emit metrics event
this.emit("metrics", metrics);
console.log(
`๐ Health check: ${
metrics.connectionStatus
}, Memory: ${metrics.memory.percentage.toFixed(
1,
)}%, Latency: ${metrics.performance.avgLatency}ms`,
);
} catch (error) {
console.error("โ Health monitoring error:", error);
this.emit("error", error);
}
}, intervalMs);
}
stopMonitoring(): void {
if (this.monitoringInterval) {
clearInterval(this.monitoringInterval);
this.monitoringInterval = undefined;
console.log("๐ Health monitoring stopped");
}
}
private checkAlertRules(metrics: HealthMetrics): void {
const now = Date.now();
this.alertRules.forEach((rule) => {
if (rule.condition(metrics)) {
const lastAlert = this.lastAlerts.get(rule.name) || 0;
const cooldownMs = rule.cooldown * 1000;
if (now - lastAlert > cooldownMs) {
const alert = {
rule: rule.name,
severity: rule.severity,
message: rule.message(metrics),
timestamp: now,
metrics,
};
this.emit("alert", alert);
this.lastAlerts.set(rule.name, now);
console.warn(
`๐จ ALERT [${rule.severity.toUpperCase()}]: ${
alert.message
}`,
);
}
}
});
}
getMetricsHistory(): HealthMetrics[] {
return [...this.metricsHistory];
}
getLatestMetrics(): HealthMetrics | null {
return this.metricsHistory.length > 0
? this.metricsHistory[this.metricsHistory.length - 1]
: null;
}
addAlertRule(rule: AlertRule): void {
this.alertRules.push(rule);
}
removeAlertRule(name: string): void {
this.alertRules = this.alertRules.filter((rule) => rule.name !== name);
}
// Performance analysis methods
calculateTrends(minutes: number = 30): {
memoryTrend: "increasing" | "decreasing" | "stable";
latencyTrend: "increasing" | "decreasing" | "stable";
operationsTrend: "increasing" | "decreasing" | "stable";
} {
const cutoff = Date.now() - minutes * 60 * 1000;
const recentMetrics = this.metricsHistory.filter(
(m) => m.timestamp > cutoff,
);
if (recentMetrics.length < 2) {
return {
memoryTrend: "stable",
latencyTrend: "stable",
operationsTrend: "stable",
};
}
const first = recentMetrics[0];
const last = recentMetrics[recentMetrics.length - 1];
return {
memoryTrend: this.calculateTrend(
first.memory.percentage,
last.memory.percentage,
),
latencyTrend: this.calculateTrend(
first.performance.avgLatency,
last.performance.avgLatency,
),
operationsTrend: this.calculateTrend(
first.performance.operationsPerSecond,
last.performance.operationsPerSecond,
),
};
}
private calculateTrend(
first: number,
last: number,
): "increasing" | "decreasing" | "stable" {
const threshold = 0.05; // 5% change threshold
const change = (last - first) / first;
if (Math.abs(change) < threshold) return "stable";
return change > 0 ? "increasing" : "decreasing";
}
}

// src/monitoring/PerformanceDashboard.ts
import { HealthMonitor, HealthMetrics } from "./HealthMonitor";
import express from "express";
export class PerformanceDashboard {
private healthMonitor: HealthMonitor;
private app: express.Application;
constructor(healthMonitor: HealthMonitor) {
this.healthMonitor = healthMonitor;
this.app = express();
this.setupRoutes();
}
private setupRoutes(): void {
this.app.use(express.static("public")); // Serve static files
// Real-time metrics endpoint
this.app.get("/api/metrics/current", (req, res) => {
const metrics = this.healthMonitor.getLatestMetrics();
res.json(metrics);
});
// Historical metrics endpoint
this.app.get("/api/metrics/history", (req, res) => {
const minutes = parseInt(req.query.minutes as string) || 30;
const cutoff = Date.now() - minutes * 60 * 1000;
const history = this.healthMonitor
.getMetricsHistory()
.filter((m) => m.timestamp > cutoff);
res.json(history);
});
// Performance trends endpoint
this.app.get("/api/metrics/trends", (req, res) => {
const minutes = parseInt(req.query.minutes as string) || 30;
const trends = this.healthMonitor.calculateTrends(minutes);
res.json(trends);
});
// Health summary endpoint
this.app.get("/api/health/summary", (req, res) => {
const latest = this.healthMonitor.getLatestMetrics();
if (!latest) {
return res.status(503).json({ error: "No metrics available" });
}
const summary = {
status: this.getOverallStatus(latest),
uptime: process.uptime(),
metrics: {
memory: `${latest.memory.percentage.toFixed(1)}%`,
latency: `${latest.performance.avgLatency}ms`,
hitRate: `${latest.performance.hitRate.toFixed(1)}%`,
operations: `${latest.performance.operationsPerSecond.toFixed(
0,
)}/sec`,
},
cluster: latest.cluster
? {
state: latest.cluster.state,
nodes: `${latest.cluster.connectedNodes}/${latest.cluster.totalNodes}`,
}
: null,
};
res.json(summary);
});
// Detailed HTML dashboard
this.app.get("/dashboard", (req, res) => {
res.send(this.generateDashboardHTML());
});
}
private getOverallStatus(
metrics: HealthMetrics,
): "healthy" | "warning" | "critical" {
if (metrics.connectionStatus !== "ready") return "critical";
if (metrics.memory.percentage > 90) return "critical";
if (metrics.performance.avgLatency > 100) return "warning";
if (metrics.performance.hitRate < 70) return "warning";
if (metrics.cluster && metrics.cluster.state !== "ok") return "warning";
return "healthy";
}
private generateDashboardHTML(): string {
return `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>๐ Redis Performance Dashboard</title>
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<style>
body {
font-family: 'Segoe UI', system-ui, sans-serif;
margin: 0;
padding: 20px;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: #333;
}
.container {
max-width: 1200px;
margin: 0 auto;
background: white;
border-radius: 12px;
padding: 30px;
box-shadow: 0 20px 40px rgba(0,0,0,0.1);
}
.header {
text-align: center;
margin-bottom: 40px;
border-bottom: 3px solid #667eea;
padding-bottom: 20px;
}
.header h1 {
color: #667eea;
margin: 0;
font-size: 2.5em;
}
.metrics-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 20px;
margin-bottom: 40px;
}
.metric-card {
background: linear-gradient(135deg, #f093fb 0%, #f5576c 100%);
color: white;
padding: 25px;
border-radius: 12px;
text-align: center;
box-shadow: 0 8px 16px rgba(0,0,0,0.1);
transition: transform 0.3s ease;
}
.metric-card:hover {
transform: translateY(-5px);
}
.metric-value {
font-size: 2.5em;
font-weight: bold;
margin-bottom: 10px;
}
.metric-label {
font-size: 1.1em;
opacity: 0.9;
}
.chart-container {
background: #f8f9fa;
padding: 25px;
border-radius: 12px;
margin-bottom: 30px;
box-shadow: 0 4px 8px rgba(0,0,0,0.05);
}
.status-indicator {
display: inline-block;
width: 12px;
height: 12px;
border-radius: 50%;
margin-right: 8px;
}
.status-healthy { background: #28a745; }
.status-warning { background: #ffc107; }
.status-critical { background: #dc3545; }
.refresh-btn {
background: #667eea;
color: white;
border: none;
padding: 12px 24px;
border-radius: 8px;
cursor: pointer;
font-size: 16px;
transition: background 0.3s ease;
}
.refresh-btn:hover {
background: #5a6fd8;
}
</style>
</head>
<body>
<div class="container">
<div class="header">
<h1>๐ Redis Performance Dashboard</h1>
<button class="refresh-btn" onclick="refreshData()">๐ Refresh</button>
<div id="status" style="margin-top: 15px; font-size: 1.2em;">
<span class="status-indicator status-healthy"></span>
Loading...
</div>
</div>
<div class="metrics-grid" id="metricsGrid">
<!-- Metrics will be populated by JavaScript -->
</div>
<div class="chart-container">
<h3>๐ Memory Usage Over Time</h3>
<canvas id="memoryChart" height="100"></canvas>
</div>
<div class="chart-container">
<h3>โก Latency and Operations/sec</h3>
<canvas id="performanceChart" height="100"></canvas>
</div>
</div>
<script>
let memoryChart, performanceChart;
async function fetchMetrics() {
try {
const [current, history] = await Promise.all([
fetch('/api/metrics/current').then(r => r.json()),
fetch('/api/metrics/history?minutes=60').then(r => r.json())
]);
updateMetricsDisplay(current);
updateCharts(history);
document.getElementById('status').innerHTML =
\`<span class="status-indicator status-healthy"></span>
Last updated: \${new Date().toLocaleTimeString()}\`;
} catch (error) {
console.error('Error fetching metrics:', error);
document.getElementById('status').innerHTML =
\`<span class="status-indicator status-critical"></span>
Error loading data\`;
}
}
function updateMetricsDisplay(metrics) {
const grid = document.getElementById('metricsGrid');
grid.innerHTML = \`
<div class="metric-card">
<div class="metric-value">\${metrics.memory.percentage.toFixed(1)}%</div>
<div class="metric-label">๐พ Memory Usage</div>
</div>
<div class="metric-card">
<div class="metric-value">\${metrics.performance.avgLatency}ms</div>
<div class="metric-label">โก Latency</div>
</div>
<div class="metric-card">
<div class="metric-value">\${metrics.performance.hitRate.toFixed(1)}%</div>
<div class="metric-label">๐ฏ Hit Rate</div>
</div>
<div class="metric-card">
<div class="metric-value">\${metrics.performance.operationsPerSecond.toFixed(0)}</div>
<div class="metric-label">๐ Ops/sec</div>
</div>
\`;
}
function updateCharts(history) {
const labels = history.map(h => new Date(h.timestamp).toLocaleTimeString());
const memoryData = history.map(h => h.memory.percentage);
const latencyData = history.map(h => h.performance.avgLatency);
const opsData = history.map(h => h.performance.operationsPerSecond);
// Memory Chart
if (memoryChart) memoryChart.destroy();
memoryChart = new Chart(document.getElementById('memoryChart'), {
type: 'line',
data: {
labels: labels,
datasets: [{
label: 'Memory Usage (%)',
data: memoryData,
borderColor: '#f5576c',
backgroundColor: 'rgba(245, 87, 108, 0.1)',
tension: 0.4
}]
},
options: {
responsive: true,
scales: {
y: { beginAtZero: true, max: 100 }
}
}
});
// Performance Chart
if (performanceChart) performanceChart.destroy();
performanceChart = new Chart(document.getElementById('performanceChart'), {
type: 'line',
data: {
labels: labels,
datasets: [{
label: 'Latency (ms)',
data: latencyData,
borderColor: '#667eea',
backgroundColor: 'rgba(102, 126, 234, 0.1)',
yAxisID: 'y'
}, {
label: 'Operations/sec',
data: opsData,
borderColor: '#28a745',
backgroundColor: 'rgba(40, 167, 69, 0.1)',
yAxisID: 'y1'
}]
},
options: {
responsive: true,
scales: {
y: { type: 'linear', position: 'left' },
y1: { type: 'linear', position: 'right', grid: { drawOnChartArea: false } }
}
}
});
}
function refreshData() {
fetchMetrics();
}
// Initial load and auto-refresh
fetchMetrics();
setInterval(fetchMetrics, 30000); // Refresh every 30 seconds
</script>
</body>
</html>
`;
}
start(port: number = 3001): void {
this.app.listen(port, () => {
console.log(
`๐ Performance dashboard available at: http://localhost:${port}/dashboard`,
);
});
}
}

// src/monitoring/AlertingSystem.ts
import { EventEmitter } from "events";
import { HealthMonitor } from "./HealthMonitor";
export interface AlertChannel {
name: string;
send(alert: Alert): Promise<void>;
}
export interface Alert {
id: string;
rule: string;
severity: "low" | "medium" | "high" | "critical";
message: string;
timestamp: number;
acknowledged: boolean;
resolvedAt?: number;
}
export class AlertingSystem extends EventEmitter {
private alerts: Map<string, Alert> = new Map();
private channels: AlertChannel[] = [];
private healthMonitor: HealthMonitor;
constructor(healthMonitor: HealthMonitor) {
super();
this.healthMonitor = healthMonitor;
this.setupEventListeners();
}
private setupEventListeners(): void {
this.healthMonitor.on("alert", async (alertData) => {
const alert: Alert = {
id: `${alertData.rule}_${alertData.timestamp}`,
rule: alertData.rule,
severity: alertData.severity,
message: alertData.message,
timestamp: alertData.timestamp,
acknowledged: false,
};
this.alerts.set(alert.id, alert);
// Send through all channels
await this.sendAlert(alert);
this.emit("alert_created", alert);
});
}
addChannel(channel: AlertChannel): void {
this.channels.push(channel);
console.log(`๐ข Added alert channel: ${channel.name}`);
}
private async sendAlert(alert: Alert): Promise<void> {
const promises = this.channels.map(async (channel) => {
try {
await channel.send(alert);
console.log(
`✅ Alert sent via ${channel.name}: ${alert.message}`,
);
} catch (error) {
console.error(
`โ Failed to send alert via ${channel.name}:`,
error,
);
}
});
await Promise.allSettled(promises);
}
acknowledgeAlert(alertId: string): boolean {
const alert = this.alerts.get(alertId);
if (alert) {
alert.acknowledged = true;
this.emit("alert_acknowledged", alert);
return true;
}
return false;
}
resolveAlert(alertId: string): boolean {
const alert = this.alerts.get(alertId);
if (alert) {
alert.resolvedAt = Date.now();
this.emit("alert_resolved", alert);
return true;
}
return false;
}
getActiveAlerts(): Alert[] {
return Array.from(this.alerts.values())
.filter((alert) => !alert.resolvedAt)
.sort((a, b) => b.timestamp - a.timestamp);
}
getAlertHistory(hours: number = 24): Alert[] {
const cutoff = Date.now() - hours * 60 * 60 * 1000;
return Array.from(this.alerts.values())
.filter((alert) => alert.timestamp > cutoff)
.sort((a, b) => b.timestamp - a.timestamp);
}
}
// Console Alert Channel
export class ConsoleAlertChannel implements AlertChannel {
name = "console";
async send(alert: Alert): Promise<void> {
const timestamp = new Date(alert.timestamp).toISOString();
const emoji = this.getSeverityEmoji(alert.severity);
console.log(
`๐จ ${emoji} [${alert.severity.toUpperCase()}] ${timestamp}: ${
alert.message
}`,
);
}
private getSeverityEmoji(severity: string): string {
switch (severity) {
case "critical":
return "๐ฅ";
case "high":
return "โ ๏ธ";
case "medium":
return "๐ก";
case "low":
return "๐ต";
default:
return "๐ข";
}
}
}
// Webhook Alert Channel
export class WebhookAlertChannel implements AlertChannel {
name = "webhook";
private webhookUrl: string;
constructor(webhookUrl: string) {
this.webhookUrl = webhookUrl;
}
async send(alert: Alert): Promise<void> {
const payload = {
alert_id: alert.id,
rule: alert.rule,
severity: alert.severity,
message: alert.message,
timestamp: alert.timestamp,
service: "redis-monitor",
};
const response = await fetch(this.webhookUrl, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(payload),
});
if (!response.ok) {
throw new Error(
`Webhook failed: ${response.status} ${response.statusText}`,
);
}
}
}
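// Minimal wiring sketch (assumptions, not from the original guide: `redisService` is the
// UniversalRedisService instance from your app, and the webhook URL is a placeholder;
// WebhookAlertChannel relies on the global fetch available in Node.js 18+).
const monitor = new HealthMonitor(redisService);
const alerting = new AlertingSystem(monitor);
alerting.addChannel(new ConsoleAlertChannel());
alerting.addChannel(new WebhookAlertChannel("https://hooks.example.com/redis-alerts"));
alerting.on("alert_created", (alert: Alert) => {
  // Escalate only the most severe alerts; the channels above already handle logging.
  if (alert.severity === "critical") console.error("Escalating:", alert.message);
});
monitor.startMonitoring(30000); // HealthMonitor drives the alerts every 30 seconds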
// Email Alert Channel (using a simple SMTP service)
export class EmailAlertChannel implements AlertChannel {
name = "email";
private recipients: string[];
private smtpConfig: any;
constructor(recipients: string[], smtpConfig: any) {
this.recipients = recipients;
this.smtpConfig = smtpConfig;
}
async send(alert: Alert): Promise<void> {
// Implementation would depend on your email service
// This is a placeholder for the interface
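// A possible implementation sketch using nodemailer (an assumption, not part of this
// guide; install nodemailer and import it at the top of the file before using this):
//
// const transporter = nodemailer.createTransport(this.smtpConfig);
// await transporter.sendMail({
//   from: this.smtpConfig.from,
//   to: this.recipients.join(", "),
//   subject: `[${alert.severity.toUpperCase()}] Redis alert: ${alert.rule}`,
//   text: `${new Date(alert.timestamp).toISOString()} - ${alert.message}`,
// });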
console.log(
`๐ง Would send email alert to ${this.recipients.join(", ")}: ${
alert.message
}`,
);
}
}

# Create monitoring script
sudo tee /usr/local/bin/redis-monitor.sh << 'EOF'
#!/bin/bash
# Redis cluster monitoring script
LOG_FILE="/var/log/redis/cluster-monitor.log"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
echo "[$TIMESTAMP] Starting Redis cluster health check" >> $LOG_FILE
# Check each node
for port in {7000..7005}; do
if redis-cli -p $port -a your-strong-password ping > /dev/null 2>&1; then
echo "[$TIMESTAMP] Node $port: OK" >> $LOG_FILE
else
echo "[$TIMESTAMP] Node $port: FAILED" >> $LOG_FILE
# Send alert (integrate with your monitoring system)
# curl -X POST "https://your-webhook-url" -d "Redis node $port is down"
fi
done
# Check cluster status
CLUSTER_STATE=$(redis-cli -p 7000 -a your-strong-password cluster info | grep cluster_state | cut -d: -f2 | tr -d '\r')
if [ "$CLUSTER_STATE" = "ok" ]; then
echo "[$TIMESTAMP] Cluster state: OK" >> $LOG_FILE
else
echo "[$TIMESTAMP] Cluster state: $CLUSTER_STATE" >> $LOG_FILE
# Send alert
fi
echo "[$TIMESTAMP] Health check completed" >> $LOG_FILE
EOF
sudo chmod +x /usr/local/bin/redis-monitor.sh
# Add to crontab
echo "*/5 * * * * /usr/local/bin/redis-monitor.sh" | sudo crontab -// src/monitoring/performance.ts
import { redisService } from "../services/RedisService";
class PerformanceMonitor {
private metrics: Map<string, number[]> = new Map();
async measureOperation<T>(
operation: () => Promise<T>,
operationName: string,
): Promise<T> {
const start = Date.now();
try {
const result = await operation();
const duration = Date.now() - start;
this.recordMetric(operationName, duration);
return result;
} catch (error) {
const duration = Date.now() - start;
this.recordMetric(`${operationName}_error`, duration);
throw error;
}
}
private recordMetric(operationName: string, duration: number): void {
if (!this.metrics.has(operationName)) {
this.metrics.set(operationName, []);
}
const metrics = this.metrics.get(operationName)!;
metrics.push(duration);
// Keep only last 100 measurements
if (metrics.length > 100) {
metrics.shift();
}
}
getMetrics(operationName: string): {
avg: number;
min: number;
max: number;
count: number;
} {
const metrics = this.metrics.get(operationName) || [];
if (metrics.length === 0) {
return { avg: 0, min: 0, max: 0, count: 0 };
}
const sum = metrics.reduce((a, b) => a + b, 0);
const avg = sum / metrics.length;
const min = Math.min(...metrics);
const max = Math.max(...metrics);
return { avg, min, max, count: metrics.length };
}
getAllMetrics(): Record<string, any> {
const result: Record<string, any> = {};
for (const [operationName] of this.metrics) {
result[operationName] = this.getMetrics(operationName);
}
return result;
}
}
export const performanceMonitor = new PerformanceMonitor();
// Usage example
async function example() {
const result = await performanceMonitor.measureOperation(
() => redisService.get("user:1"),
"get_user",
);
console.log("Metrics:", performanceMonitor.getMetrics("get_user"));
}

# Security group configuration
aws ec2 create-security-group \
--group-name redis-cluster-sg \
--description "Redis cluster security group"
# Allow Redis cluster communication
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxx \
--protocol tcp \
--port 7000-7005 \
--source-group sg-xxxxxxxxx
# Allow cluster bus communication
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxx \
--protocol tcp \
--port 17000-17005 \
--source-group sg-xxxxxxxxx
# Allow Redis Insight (only from specific IPs)
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxx \
--protocol tcp \
--port 8001 \
--cidr 10.0.0.0/8
# Allow SSH (from bastion host only)
aws ec2 authorize-security-group-ingress \
--group-id sg-xxxxxxxxx \
--protocol tcp \
--port 22 \
--source-group sg-bastion

# Generate strong password
REDIS_PASSWORD=$(openssl rand -base64 32)
echo "Redis password: $REDIS_PASSWORD"
# Create auth configuration
sudo tee /etc/redis/auth.conf << EOF
# Authentication
requirepass $REDIS_PASSWORD
masterauth $REDIS_PASSWORD
# Rename dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG "CONFIG_$REDIS_PASSWORD"
rename-command SHUTDOWN "SHUTDOWN_$REDIS_PASSWORD"
rename-command DEBUG ""
rename-command EVAL ""
rename-command SCRIPT ""
EOF
# Include auth config in main config
echo "include /etc/redis/auth.conf" >> /etc/redis/cluster/redis-7000.conf# Generate SSL certificates
sudo mkdir -p /etc/redis/ssl
cd /etc/redis/ssl
# Create CA private key
sudo openssl genrsa -out ca-key.pem 4096
# Create CA certificate
sudo openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem -subj "/C=US/ST=CA/L=San Francisco/O=MyOrg/CN=Redis CA"
# Create server private key
sudo openssl genrsa -out redis-server-key.pem 4096
# Create server certificate signing request
sudo openssl req -subj "/C=US/ST=CA/L=San Francisco/O=MyOrg/CN=redis-server" -new -key redis-server-key.pem -out server.csr
# Create server certificate
sudo openssl x509 -req -days 365 -in server.csr -CA ca.pem -CAkey ca-key.pem -out redis-server-cert.pem -CAcreateserial
# Set permissions
sudo chown -R redis:redis /etc/redis/ssl
sudo chmod 600 /etc/redis/ssl/*-key.pem

# Add TLS configuration to Redis
sudo tee -a /etc/redis/cluster/redis-7000.conf << 'EOF'
# TLS Configuration
tls-port 7000
port 0
tls-cert-file /etc/redis/ssl/redis-server-cert.pem
tls-key-file /etc/redis/ssl/redis-server-key.pem
tls-ca-cert-file /etc/redis/ssl/ca.pem
tls-dh-params-file /etc/redis/ssl/redis.dh
tls-protocols "TLSv1.2 TLSv1.3"
tls-ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
EOF
# Generate DH parameters
sudo openssl dhparam -out /etc/redis/ssl/redis.dh 2048
sudo chown redis:redis /etc/redis/ssl/redis.dh

# Check cluster status
redis-cli -p 7000 -a your-password cluster info
redis-cli -p 7000 -a your-password cluster nodes
# Fix split-brain scenario
redis-cli -p 7000 -a your-password cluster reset hard
# Recreate cluster
# Check network connectivity
for port in {7000..7005}; do
echo "Testing port $port..."
nc -zv <node-ip> $port
done

# Check memory usage
redis-cli -p 7000 -a your-password info memory
# Monitor memory in real-time
watch -n 1 'redis-cli -p 7000 -a your-password info memory | grep used_memory_human'
# Check for memory leaks
redis-cli -p 7000 -a your-password memory usage <key>

# Monitor slow queries
redis-cli -p 7000 -a your-password slowlog get 10
# Check latency
redis-cli -p 7000 -a your-password --latency-history
# Monitor operations per second
redis-cli -p 7000 -a your-password --stat

// Connection retry logic
import { redisService } from "../services/RedisService";
class ConnectionManager {
private reconnectAttempts = 0;
private maxReconnectAttempts = 5;
private reconnectDelay = 1000;
async handleConnectionError(error: Error): Promise<void> {
console.error("Redis connection error:", error);
if (this.reconnectAttempts < this.maxReconnectAttempts) {
this.reconnectAttempts++;
console.log(
`Attempting to reconnect... (${this.reconnectAttempts}/${this.maxReconnectAttempts})`,
);
await new Promise((resolve) =>
setTimeout(resolve, this.reconnectDelay),
);
try {
await redisService.ping();
console.log("Reconnected successfully");
this.reconnectAttempts = 0;
} catch (retryError) {
await this.handleConnectionError(retryError as Error);
}
} else {
console.error("Max reconnection attempts reached");
throw error;
}
}
}

# Create debugging script
sudo tee /usr/local/bin/redis-debug.sh << 'EOF'
#!/bin/bash
echo "=== Redis Cluster Debug Information ==="
echo "Date: $(date)"
echo
echo "=== System Information ==="
echo "Memory usage:"
free -h
echo
echo "Disk usage:"
df -h
echo
echo "CPU usage:"
top -bn1 | grep "Cpu(s)"
echo
echo "=== Redis Process Information ==="
ps aux | grep redis
echo
echo "=== Redis Configuration ==="
for port in {7000..7005}; do
echo "--- Node $port ---"
redis-cli -p $port -a your-password config get maxmemory
redis-cli -p $port -a your-password config get maxmemory-policy
done
echo
echo "=== Cluster Information ==="
redis-cli -p 7000 -a your-password cluster info
echo
redis-cli -p 7000 -a your-password cluster nodes
echo
echo "=== Memory Usage ==="
for port in {7000..7005}; do
echo "--- Node $port ---"
redis-cli -p $port -a your-password info memory | grep used_memory_human
done
echo
echo "=== Slow Log ==="
redis-cli -p 7000 -a your-password slowlog get 5
echo
echo "=== Network Statistics ==="
netstat -tuln | grep -E ':700[0-5]|:1700[0-5]'
echo
echo "=== Log Files ==="
echo "Recent Redis logs:"
tail -20 /var/log/redis/cluster/redis-7000.log
EOF
sudo chmod +x /usr/local/bin/redis-debug.sh

# Create log analysis script
sudo tee /usr/local/bin/redis-log-analyzer.sh << 'EOF'
#!/bin/bash
LOG_DIR="/var/log/redis/cluster"
ANALYSIS_FILE="/tmp/redis-analysis.txt"
echo "Redis Log Analysis - $(date)" > $ANALYSIS_FILE
echo "=================================" >> $ANALYSIS_FILE
# Analyze error patterns
echo "ERROR PATTERNS:" >> $ANALYSIS_FILE
grep -h "ERROR\|WARNING\|CRITICAL" $LOG_DIR/*.log | sort | uniq -c | sort -nr >> $ANALYSIS_FILE
echo -e "\nCONNECTION ISSUES:" >> $ANALYSIS_FILE
grep -h "Connection refused\|Connection reset\|Connection timeout" $LOG_DIR/*.log | wc -l >> $ANALYSIS_FILE
echo -e "\nMEMORY ISSUES:" >> $ANALYSIS_FILE
grep -h "OOM\|memory\|evict" $LOG_DIR/*.log | tail -10 >> $ANALYSIS_FILE
echo -e "\nCLUSTER ISSUES:" >> $ANALYSIS_FILE
grep -h "cluster\|failover\|replica" $LOG_DIR/*.log | tail -10 >> $ANALYSIS_FILE
cat $ANALYSIS_FILE
EOF
sudo chmod +x /usr/local/bin/redis-log-analyzer.sh

# Install custom modules
cd /tmp
git clone https://github.com/RedisLabsModules/RedisTimeSeries.git
cd RedisTimeSeries
make
sudo cp redismodule.so /opt/redis-stack/lib/redistimeseries-custom.so
# Load custom module
echo "loadmodule /opt/redis-stack/lib/redistimeseries-custom.so" >> /etc/redis/cluster/redis-7000.conf# Create backup script
sudo tee /usr/local/bin/redis-backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/var/backups/redis"
DATE=$(date +%Y%m%d_%H%M%S)
S3_BUCKET="your-redis-backups"
# Create backup directory
mkdir -p $BACKUP_DIR
# Backup each node
for port in {7000..7005}; do
echo "Backing up node $port..."
# Create consistent backup and remember the previous save timestamp
LAST_SAVE=$(redis-cli -p $port -a your-password LASTSAVE)
redis-cli -p $port -a your-password BGSAVE
# Wait for the background save to complete (LASTSAVE changes when it finishes)
while [ "$(redis-cli -p $port -a your-password LASTSAVE)" -eq "$LAST_SAVE" ]; do
sleep 1
done
# Copy RDB file
cp /var/lib/redis/cluster/dump-$port.rdb $BACKUP_DIR/dump-$port-$DATE.rdb
# Compress and upload to S3
gzip $BACKUP_DIR/dump-$port-$DATE.rdb
aws s3 cp $BACKUP_DIR/dump-$port-$DATE.rdb.gz s3://$S3_BUCKET/
done
# Clean up old backups (keep last 7 days)
find $BACKUP_DIR -name "*.rdb.gz" -mtime +7 -delete
echo "Backup completed: $DATE"
EOF
sudo chmod +x /usr/local/bin/redis-backup.sh
# Schedule daily backups
echo "0 2 * * * /usr/local/bin/redis-backup.sh" | sudo crontab -// src/services/AutoScaler.ts
import { redisService } from "./RedisService";
import AWS from "aws-sdk";
export class RedisAutoScaler {
private ec2 = new AWS.EC2();
private cloudWatch = new AWS.CloudWatch();
async checkMetrics(): Promise<{
cpuUsage: number;
memoryUsage: number;
connections: number;
opsPerSecond: number;
}> {
try {
// Get Redis metrics
const info = await redisService.client.info();
const lines = info.split("\r\n");
const metrics = {
cpuUsage: 0,
memoryUsage: 0,
connections: 0,
opsPerSecond: 0,
};
lines.forEach((line) => {
if (line.startsWith("used_memory_rss:")) {
const memory = parseInt(line.split(":")[1]);
metrics.memoryUsage =
(memory / (32 * 1024 * 1024 * 1024)) * 100; // Assuming 32GB instances
}
if (line.startsWith("connected_clients:")) {
metrics.connections = parseInt(line.split(":")[1]);
}
if (line.startsWith("instantaneous_ops_per_sec:")) {
metrics.opsPerSecond = parseInt(line.split(":")[1]);
}
});
return metrics;
} catch (error) {
console.error("Error checking metrics:", error);
throw error;
}
}
async scaleUp(): Promise<void> {
console.log("Scaling up Redis cluster...");
// Launch new instances
const params = {
LaunchTemplateName: "redis-stack-template",
MinCount: 2,
MaxCount: 2,
TagSpecifications: [
{
ResourceType: "instance",
Tags: [
{
Key: "Name",
Value: "redis-scale-up",
},
],
},
],
};
try {
const result = await this.ec2.runInstances(params).promise();
console.log(
"New instances launched:",
result.Instances?.map((i) => i.InstanceId),
);
// Wait for instances to be ready and add to cluster
// Implementation depends on your specific setup
} catch (error) {
console.error("Error scaling up:", error);
throw error;
}
}
async scaleDown(): Promise<void> {
console.log("Scaling down Redis cluster...");
// Implementation for scaling down
// Remove replicas from cluster and terminate instances
}
async autoScale(): Promise<void> {
const metrics = await this.checkMetrics();
const scaleUpThreshold = {
cpuUsage: 80,
memoryUsage: 85,
connections: 8000,
opsPerSecond: 100000,
};
const scaleDownThreshold = {
cpuUsage: 30,
memoryUsage: 40,
connections: 1000,
opsPerSecond: 10000,
};
if (
metrics.cpuUsage > scaleUpThreshold.cpuUsage ||
metrics.memoryUsage > scaleUpThreshold.memoryUsage ||
metrics.connections > scaleUpThreshold.connections ||
metrics.opsPerSecond > scaleUpThreshold.opsPerSecond
) {
await this.scaleUp();
} else if (
metrics.cpuUsage < scaleDownThreshold.cpuUsage &&
metrics.memoryUsage < scaleDownThreshold.memoryUsage &&
metrics.connections < scaleDownThreshold.connections &&
metrics.opsPerSecond < scaleDownThreshold.opsPerSecond
) {
await this.scaleDown();
}
}
}
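// Scheduling sketch (an assumption, not part of the original guide: run this in a
// long-lived worker so the threshold checks happen continuously, and log failures
// instead of letting a rejected promise crash the process).
const autoScaler = new RedisAutoScaler();
setInterval(() => {
  autoScaler
    .autoScale()
    .catch((err) => console.error("Auto-scaling check failed:", err));
}, 5 * 60 * 1000); // evaluate scaling thresholds every 5 minutes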

## ✅ Production Checklist
### ๐ฏ Deployment Readiness Assessment
```mermaid
graph TD
A[๐ Production Deployment] --> B{Infrastructure Ready?}
B -->|✅ Yes| C{Security Configured?}
B -->|โ No| D[โ๏ธ Setup Infrastructure]
C -->|✅ Yes| E{Monitoring Setup?}
C -->|โ No| F[๐ Configure Security]
E -->|✅ Yes| G{Performance Tested?}
E -->|โ No| H[๐ Setup Monitoring]
G -->|✅ Yes| I[๐ Deploy to Production]
G -->|โ No| J[โก Run Performance Tests]
D --> B
F --> C
H --> E
J --> G
style I fill:#e8f5e8
style A fill:#e1f5fe
```

- **EC2 Instances Configured**
  - Appropriate instance types selected (r6i.xlarge recommended)
  - Enhanced networking enabled
  - Storage properly configured (gp3 SSD, 3000 IOPS)
  - All instances in correct AZs for high availability
- **Network Configuration**
  - VPC with proper subnets configured
  - Security groups allowing Redis traffic (ports 7000-7005, 17000-17005)
  - Load balancer configured for Redis Insight access
  - DNS records configured (if using custom domains)
- **System Optimization**
  - Kernel parameters optimized (`vm.overcommit_memory=1`, `net.core.somaxconn=65535`)
  - Transparent huge pages disabled
  - Memory and TCP settings optimized
  - File descriptor limits increased
- **Installation Complete**
  - Redis Stack installed on all nodes
  - All required modules loaded (RedisJSON, RedisSearch, etc.)
  - Service files created and enabled
  - Proper file permissions set
- **Configuration Verified**
  - Strong passwords generated and configured
  - Memory limits properly set
  - Persistence settings configured (AOF + RDB)
  - Cluster settings verified (if using cluster)
  - Dangerous commands renamed or disabled
- **Cluster Setup (if applicable)**
  - All 6 nodes running and healthy
  - Cluster formation completed successfully
  - Hash slots properly distributed
  - Replication working correctly
  - Failover tested and verified
- **Authentication & Authorization**
  - Strong passwords implemented (>32 characters)
  - ACL users configured (if needed)
  - Dangerous commands disabled (`FLUSHALL`, `FLUSHDB`, etc.)
  - Admin commands password-protected
- **Network Security**
  - Security groups properly configured
  - No unnecessary ports exposed
  - SSL/TLS certificates installed (if required)
  - VPC network isolation implemented
- **System Security**
  - Redis running as non-root user
  - File permissions correctly set
  - SSH keys properly managed
  - System updates applied
- **Health Monitoring**
  - Health check scripts deployed
  - Performance monitoring active
  - Log aggregation configured
  - Dashboard accessible
- **Alert Configuration**
  - Memory usage alerts (>85%)
  - Connection failure alerts
  - High latency alerts (>10ms)
  - Cluster state alerts
  - Alert channels configured (email, webhook, etc.)
- **Backup System**
  - Automated backup scripts deployed
  - Backup verification tested
  - Restore procedures documented
  - S3 backup storage configured
#!/bin/bash
# ๐งช Production Health Check Script
echo "๐ Running production health checks..."
# Test basic connectivity
echo "๐ก Testing Redis connectivity..."
for port in 6379 7000 7001 7002 7003 7004 7005; do
if redis-cli -p $port -a "$REDIS_PASSWORD" ping > /dev/null 2>&1; then
echo "โ
Port $port: OK"
else
echo "โ Port $port: FAILED"
exit 1
fi
done
# Test cluster status (if applicable)
if redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster info > /dev/null 2>&1; then
echo "๐ Testing cluster status..."
CLUSTER_STATE=$(redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster info | grep cluster_state | cut -d: -f2 | tr -d '\r')
if [ "$CLUSTER_STATE" = "ok" ]; then
echo "โ
Cluster state: OK"
else
echo "โ Cluster state: $CLUSTER_STATE"
exit 1
fi
# Check node count
NODE_COUNT=$(redis-cli -p 7000 -a "$REDIS_PASSWORD" cluster nodes | wc -l)
if [ "$NODE_COUNT" -eq 6 ]; then
echo "โ
All 6 cluster nodes present"
else
echo "โ Expected 6 nodes, found $NODE_COUNT"
exit 1
fi
fi
# Test memory usage
echo "๐พ Checking memory usage..."
for port in 7000 7001 7002; do
MEMORY_USAGE=$(redis-cli -p $port -a "$REDIS_PASSWORD" info memory | grep used_memory_human | cut -d: -f2 | tr -d '\r')
echo "๐ Node $port memory usage: $MEMORY_USAGE"
done
# Test performance
echo "โก Running performance test..."
redis-benchmark -h localhost -p 7000 -a "$REDIS_PASSWORD" -n 10000 -t set,get -q
if [ $? -eq 0 ]; then
echo "โ
Performance test passed"
else
echo "โ Performance test failed"
exit 1
fi
echo "๐ All health checks passed!"#!/bin/bash
# ๐ Production Performance Benchmarks
echo "๐ Running production performance benchmarks..."
# Single instance benchmarks
if redis-cli -p 6379 ping > /dev/null 2>&1; then
echo "๐ด Single Instance Benchmarks:"
redis-benchmark -h localhost -p 6379 -a "$REDIS_PASSWORD" -n 100000 -t set,get,hset,hget -P 16 -q
fi
# Cluster benchmarks
if redis-cli -p 7000 ping > /dev/null 2>&1; then
echo "๐ก Cluster Benchmarks:"
redis-benchmark -h localhost -p 7000 -a "$REDIS_PASSWORD" -n 100000 -t set,get,hset,hget -P 16 -q --cluster
fi
# Expected results for r6i.xlarge:
echo "๐ Expected Performance (r6i.xlarge):"
echo " ๐ด Single Instance: ~50,000 ops/sec"
echo " ๐ก Redis Cluster: ~200,000 ops/sec"
echo " โก Latency: <1ms for 99% of requests"
echo " ๐พ Memory: <85% usage under normal load"| Metric | ๐ด Single Instance | ๐ก Redis Cluster | ๐ข DragonDB |
|---|---|---|---|
| Operations/sec | 50,000+ | 200,000+ | 120,000+ |
| Latency (P99) | <2ms | <1ms | <1ms |
| Memory Efficiency | 85% max | 85% max | 70% max |
| Availability | 99.5% | 99.9% | 99.8% |
| Connection Limit | 10,000 | 50,000 | 30,000 |
# Emergency memory cleanup
echo "๐จ High memory usage detected!"
# Check memory distribution
redis-cli -p 7000 -a "$REDIS_PASSWORD" info memory
# Find large keys
redis-cli -p 7000 -a "$REDIS_PASSWORD" --bigkeys
# Emergency cleanup (use with caution)
# redis-cli -p 7000 -a "$REDIS_PASSWORD" memory purge
# Scale up if needed
# aws ec2 modify-instance-attribute --instance-id i-xxx --instance-type r6i.2xlarge

# Diagnose connection problems
echo "๐ Diagnosing connection issues..."
# Check network connectivity
nc -zv redis-node-ip 7000
# Check Redis process
ps aux | grep redis
# Check system resources
free -h
df -h
# Restart Redis if needed (last resort)
# sudo systemctl restart redis-7000

# Recovery from cluster split-brain
echo "๐ Recovering from cluster split-brain..."
# Check cluster state on all nodes
for port in 7000 7001 7002 7003 7004 7005; do
echo "Node $port:"
redis-cli -p $port -a "$REDIS_PASSWORD" cluster nodes | head -1
done
# If needed, reset and recreate cluster (DANGEROUS - DATA LOSS)
# for port in 7000 7001 7002 7003 7004 7005; do
# redis-cli -p $port -a "$REDIS_PASSWORD" cluster reset hard
# done
#
# redis-cli --cluster create \
# node1:7000 node2:7001 node3:7002 \
# node4:7003 node5:7004 node6:7005 \
# --cluster-replicas 1 -a "$REDIS_PASSWORD" --cluster-yes

- **Morning Health Check**
  - Run health check script
  - Review overnight alerts
  - Check memory and CPU usage
  - Verify backup completion
- **Performance Review**
  - Check dashboard metrics
  - Review slow query log
  - Monitor hit rates
  - Analyze traffic patterns
- **Maintenance Tasks**
  - Log rotation
  - Backup verification
  - Security updates (if needed)
  - Documentation updates
- **Performance Analysis**
  - Weekly performance report
  - Capacity planning review
  - Trend analysis
  - Optimization opportunities
- **Security Review**
  - Access log review
  - Security patch updates
  - Certificate expiration check
  - Password rotation (if required)
- **Disaster Recovery Test**
  - Backup restore test
  - Failover simulation
  - Recovery time measurement
  - Documentation updates
- ๐ Performance: >200K ops/sec sustained
- โก Latency: <1ms for 99% of requests
- ๐พ Memory: <85% utilization
- ๐ Availability: >99.9% uptime
- ๐ฏ Hit Rate: >95% cache hits
- ๐ Connections: Handle 50K+ concurrent
- ๐ฐ Cost Savings: 60-70% vs managed Redis
- โก Performance Improvement: 5x faster than traditional DB
- ๐ง Operational Efficiency: Automated monitoring and alerts
- ๐ฑ User Experience: Sub-second response times
- ๐ Scalability: Horizontal scaling capability
Congratulations! You've successfully implemented a world-class Redis infrastructure that combines:
graph LR
A[๐ฏ Your Achievement] --> B[โก Performance]
A --> C[๐ Security]
A --> D[๐ Monitoring]
A --> E[๐ฐ Cost Efficiency]
B --> B1[200K+ ops/sec]
B --> B2[Sub-ms latency]
C --> C1[Enterprise security]
C --> C2[Network isolation]
D --> D1[Real-time dashboards]
D --> D2[Proactive alerts]
E --> E1[70% cost savings]
E --> E2[Self-hosted control]
style A fill:#e8f5e8
style B fill:#ffcdd2
style C fill:#e1f5fe
style D fill:#fff3e0
style E fill:#f3e5f5
- Sub-millisecond latency for 99% of requests
- 200,000+ operations per second sustained throughput
- Horizontal scaling capability with Redis Cluster
- Advanced caching patterns with Redis Stack modules
- Multi-layered security with authentication, network isolation, and encryption
- Dangerous command protection and access control
- Regular security auditing and monitoring
- Compliance-ready infrastructure
- Comprehensive monitoring with real-time dashboards
- Proactive alerting for issue prevention
- Automated backup and recovery procedures
- Performance optimization tools and techniques
- 60-70% cost savings compared to managed Redis services
- Predictable pricing with self-hosted infrastructure
- Resource optimization for maximum efficiency
- No vendor lock-in with open-source solutions
| Scenario | Recommended Solution | Performance | Complexity | Cost |
|---|---|---|---|---|
| ๐งช Development | Docker + Single Instance | 10K ops/sec | Low | $25/month |
| ๐ฑ Small Apps | Single Instance on r5.large | 50K ops/sec | Low | $120/month |
| ๐ Production | Redis Cluster on r6i.xlarge | 200K+ ops/sec | Medium | $1,800/month |
| ๐ฎ Modern Stack | DragonDB Alternative | 120K ops/sec | Low | $300/month |
- Deploy to staging environment first
- Run comprehensive tests with your workload
- Train your team on operations procedures
- Set up monitoring and alerting
- Document custom configurations
- Implement auto-scaling based on metrics
- Add geo-replication for global applications
- Integrate with CI/CD pipelines
- Explore Redis modules for specific use cases
- Optimize for your specific workload
- Monitor Redis community for updates
- Participate in Redis conferences and webinars
- Contribute to open-source projects
- Share knowledge with your team
- Stay updated on security best practices
- Redis Official Documentation: redis.io/documentation
- Redis Community: Redis Discord
- Stack Overflow: Tag your questions with `redis`
- GitHub Issues: For specific Redis modules
- AWS Support: For EC2 and infrastructure issues
- Redis University: Free online courses
- Redis Labs Blog: Latest updates and best practices
- AWS Well-Architected Framework: Infrastructure best practices
- Monitoring Best Practices: Prometheus, Grafana integration
- Performance Tuning Guides: Advanced optimization techniques
You now have a production-ready, high-performance Redis infrastructure that can handle enterprise-scale workloads while maintaining cost efficiency and operational excellence.
Remember:
- ๐ Monitor continuously - Prevention is better than cure
- ๐ Document everything - Your future self will thank you
- ๐งช Test thoroughly - Especially before production changes
- ๐ค Share knowledge - Help your team and the community
- ๐ Keep optimizing - There's always room for improvement
You've built something amazing! ๐
Happy caching! May your latencies be low and your throughput be high! โก๐
# Health check
redis-cli -p 7000 -a "password" ping
# Cluster status
redis-cli -p 7000 -a "password" cluster info
# Memory usage
redis-cli -p 7000 -a "password" info memory
# Performance test
redis-benchmark -h localhost -p 7000 -a "password" -n 10000 -t set,get -q
# Monitor real-time
redis-cli -p 7000 -a "password" --stat
# View slow queries
redis-cli -p 7000 -a "password" slowlog get 10#Redis #AWS #EC2 #Performance #Clustering #NodeJS #TypeScript #DevOps #Database #Caching #Monitoring #Production #Self-Hosted #Cost-Optimization
Document Version: 2.0
Last Updated: July 2025
Compatibility: Redis 7.2+, Node.js 18+, AWS EC2
Maintained by: Your DevOps Team โค๏ธ
This guide provides a comprehensive setup for a production-ready Redis Stack cluster on AWS EC2 with high throughput and low latency capabilities. The configuration includes:
- High Performance: Optimized for sub-millisecond latency and high throughput
- Scalability: Cluster setup with automatic failover and horizontal scaling capabilities
- Security: SSL/TLS encryption, authentication, and network security
- Monitoring: Comprehensive monitoring and alerting system
- Developer Experience: Full TypeScript/Node.js integration with Redis Insight UI
- Cost Optimization: Self-hosted solution reducing cloud service costs
- Performance: 200,000+ operations per second with sub-millisecond latency
- Reliability: 99.9% uptime with automatic failover
- Cost Savings: 60-70% cost reduction compared to managed Redis services
- Full Feature Set: All Redis Stack modules available
- Developer Productivity: Complete TypeScript integration and Redis Insight UI
- Deploy the cluster in a staging environment first
- Run comprehensive performance tests
- Train your team on operations procedures
- Set up monitoring and alerting
- Plan for disaster recovery scenarios
For additional support or questions, refer to the Redis documentation or consult with your DevOps team.