Deep Copy: Recursively copies all nested objects/arrays.
```javascript
// Deep copy methods
const deep1 = JSON.parse(JSON.stringify(original)); // Limited
const deep2 = structuredClone(original);            // Modern browsers

// Custom implementation
function deepClone(obj) {
  if (obj === null || typeof obj !== 'object') return obj;
  if (obj instanceof Date) return new Date(obj);
  if (obj instanceof Array) return obj.map(deepClone);

  const cloned = {};
  Object.keys(obj).forEach(key => {
    cloned[key] = deepClone(obj[key]);
  });
  return cloned;
}
```
Explain the difference between == and === in JavaScript.
== (Loose Equality): Type coercion before comparison.
=== (Strict Equality): No type coercion, checks type and value.
```javascript
// Type coercion examples
console.log(5 == '5');           // true
console.log(5 === '5');          // false
console.log(null == undefined);  // true
console.log(null === undefined); // false
console.log(0 == false);         // true
console.log(0 === false);        // false
console.log('' == false);        // true
console.log('' === false);       // false

// Always use === unless you specifically need coercion
```
What is the virtual DOM, and how does React use it?
Virtual DOM: JavaScript representation of the actual DOM. It's a programming concept where a "virtual" representation of UI is kept in memory and synced with the "real" DOM.
React's Process:
State changes trigger re-render
New virtual DOM tree is created
Diffing algorithm compares old vs new virtual DOM
Reconciliation updates only changed parts of real DOM
```javascript
// Virtual DOM representation
const virtualElement = {
  type: 'div',
  props: {
    className: 'container',
    children: [
      { type: 'h1', props: { children: 'Hello' } },
      { type: 'p', props: { children: 'World' } }
    ]
  }
};

// React Fiber (current implementation) uses:
// - Incremental rendering
// - Prioritization of updates
// - Interruptible work
```
Benefits:
Predictable updates
Batch DOM updates
Cross-browser compatibility
Enables features like time-travel debugging
Explain the purpose of React hooks. How does useEffect work?
Hooks: Functions that let you use state and other React features in functional components.
Rules:
Only call at top level (not in loops/conditions)
Only call from React functions
```javascript
import { useState, useEffect, useCallback, useMemo } from 'react';

function UserProfile({ userId }) {
  const [user, setUser] = useState(null);
  const [loading, setLoading] = useState(true);

  // useEffect for side effects
  useEffect(() => {
    let cancelled = false;

    async function fetchUser() {
      setLoading(true);
      try {
        const userData = await api.getUser(userId);
        if (!cancelled) {
          setUser(userData);
        }
      } catch (error) {
        if (!cancelled) {
          console.error('Failed to fetch user:', error);
        }
      } finally {
        if (!cancelled) {
          setLoading(false);
        }
      }
    }

    fetchUser();

    // Cleanup function
    return () => {
      cancelled = true;
    };
  }, [userId]); // Dependency array

  const handleUpdate = useCallback((updates) => {
    setUser(prev => ({ ...prev, ...updates }));
  }, []);

  const displayName = useMemo(() => {
    return user ? `${user.firstName} ${user.lastName}` : '';
  }, [user]);

  if (loading) return <div>Loading...</div>;

  return (
    <div>
      <h1>{displayName}</h1>
      {/* ... */}
    </div>
  );
}
```
useEffect Patterns:
No deps: runs after every render
Empty deps []: runs once after mount
With deps [userId]: runs when dependencies change (all three variants are sketched below)
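A minimal sketch of the three dependency-array variants listed above; the component and the `count` state are illustrative, not from the original:

```javascript
import { useState, useEffect } from 'react';

function EffectPatterns({ userId }) {
  const [count, setCount] = useState(0);

  // No deps: runs after every render
  useEffect(() => {
    console.log('rendered');
  });

  // Empty deps []: runs once after mount; cleanup runs on unmount
  useEffect(() => {
    const id = setInterval(() => setCount(c => c + 1), 1000);
    return () => clearInterval(id);
  }, []);

  // With deps [userId]: runs on mount and whenever userId changes
  useEffect(() => {
    console.log('userId changed:', userId);
  }, [userId]);

  return <div>{count}</div>;
}
```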
What is the difference between controlled and uncontrolled components in React?
Controlled Components: React controls the form data via state.
```javascript
function ControlledForm() {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');

  const handleSubmit = (e) => {
    e.preventDefault();
    console.log({ email, password });
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="email"
        value={email} // Controlled by React state
        onChange={(e) => setEmail(e.target.value)}
      />
      <input
        type="password"
        value={password}
        onChange={(e) => setPassword(e.target.value)}
      />
      <button type="submit">Submit</button>
    </form>
  );
}
```
Uncontrolled Components: DOM handles the form data, React uses refs.
```javascript
function UncontrolledForm() {
  const emailRef = useRef();
  const passwordRef = useRef();

  const handleSubmit = (e) => {
    e.preventDefault();
    console.log({
      email: emailRef.current.value,
      password: passwordRef.current.value
    });
  };

  return (
    <form onSubmit={handleSubmit}>
      <input
        type="email"
        ref={emailRef}
        defaultValue="" // Default value, not controlled
      />
      <input type="password" ref={passwordRef} defaultValue="" />
      <button type="submit">Submit</button>
    </form>
  );
}
```
When to use:
Controlled: Complex validation, conditional rendering, multiple forms
Uncontrolled: Simple forms, integrating with non-React code
What is the significance of key props in React lists?
Keys help React identify which items have changed, added, or removed. They should be stable, predictable, and unique among siblings.
```javascript
// ❌ Bad - using array index
function BadList({ items }) {
  return (
    <ul>
      {items.map((item, index) => (
        <li key={index}>{item.name}</li> // Problems with reordering
      ))}
    </ul>
  );
}

// ✅ Good - using stable unique identifier
function GoodList({ items }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}

// ✅ Complex example with state
function TodoList() {
  const [todos, setTodos] = useState([
    { id: '1', text: 'Learn React', completed: false },
    { id: '2', text: 'Build app', completed: false }
  ]);

  return (
    <ul>
      {todos.map((todo) => (
        <TodoItem
          key={todo.id} // Preserves component state during reorders
          todo={todo}
          onToggle={(id) => {
            setTodos(todos.map(t =>
              t.id === id ? { ...t, completed: !t.completed } : t
            ));
          }}
        />
      ))}
    </ul>
  );
}
```
Without proper keys: React may reuse components incorrectly, causing state issues and performance problems.
NodeJS and Backend Development
What is event-driven architecture in NodeJS?
Event-driven architecture uses events to trigger and communicate between decoupled services. NodeJS is built around this pattern using the EventEmitter class.
```javascript
const EventEmitter = require('events');

class OrderService extends EventEmitter {
  async createOrder(orderData) {
    try {
      // Process order
      const order = await this.saveOrder(orderData);

      // Emit events for different services
      this.emit('order.created', order);
      this.emit('inventory.reserve', order.items);
      this.emit('payment.process', order.payment);

      return order;
    } catch (error) {
      this.emit('order.failed', { orderData, error });
      throw error;
    }
  }

  async saveOrder(data) {
    // Database logic
    return { id: Date.now(), ...data };
  }
}

// Service implementations
class InventoryService {
  constructor(orderService) {
    orderService.on('inventory.reserve', this.reserveItems.bind(this));
  }

  async reserveItems(items) {
    console.log('Reserving items:', items);
    // Reserve inventory logic
  }
}

class PaymentService {
  constructor(orderService) {
    orderService.on('payment.process', this.processPayment.bind(this));
  }

  async processPayment(paymentData) {
    console.log('Processing payment:', paymentData);
    // Payment processing logic
  }
}

// Usage
const orderService = new OrderService();
const inventoryService = new InventoryService(orderService);
const paymentService = new PaymentService(orderService);

orderService.createOrder({
  items: [{ id: 1, quantity: 2 }],
  payment: { amount: 100, method: 'card' }
});
```
Benefits:
Loose coupling between services
Scalability and maintainability
Easy to add new features
Natural fit for microservices
How does NodeJS handle asynchronous operations?
NodeJS uses an event-driven, non-blocking I/O model: a single-threaded event loop dispatches callbacks, while libuv's thread pool handles file system access and other blocking I/O behind the scenes.
```javascript
const fs = require('fs');
const fsp = require('fs').promises;
const { Worker } = require('worker_threads');
const stream = require('stream');
const { pipeline } = require('stream/promises');

// 1. Callback Pattern (older)
function readFileCallback(filename, callback) {
  fs.readFile(filename, (err, data) => {
    if (err) return callback(err);
    callback(null, data.toString());
  });
}

// 2. Promise Pattern
async function readFilePromise(filename) {
  const data = await fsp.readFile(filename);
  return data.toString();
}

// 3. Stream Pattern for large files
async function processLargeFile(inputFile, outputFile) {
  const readStream = fs.createReadStream(inputFile);
  const writeStream = fs.createWriteStream(outputFile);

  const transform = new stream.Transform({
    transform(chunk, encoding, callback) {
      // Process chunk
      const processed = chunk.toString().toUpperCase();
      callback(null, processed);
    }
  });

  await pipeline(readStream, transform, writeStream);
}

// 4. CPU-intensive tasks with Worker Threads
function fibonacciWorker(n) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(`
      const { parentPort } = require('worker_threads');
      function fibonacci(n) {
        if (n < 2) return n;
        return fibonacci(n - 1) + fibonacci(n - 2);
      }
      parentPort.on('message', (n) => {
        parentPort.postMessage(fibonacci(n));
      });
    `, { eval: true });

    worker.postMessage(n);
    worker.on('message', resolve);
    worker.on('error', reject);
  });
}

// Usage example
async function main() {
  // Parallel async operations
  const [file1, file2, fibResult] = await Promise.all([
    readFilePromise('file1.txt'),
    readFilePromise('file2.txt'),
    fibonacciWorker(35)
  ]);
  console.log('All operations completed');
}
```
What are WebSockets, and how do you implement real-time communication in NodeJS?
WebSockets provide full-duplex communication between client and server over a single TCP connection.
```javascript
const WebSocket = require('ws');
const jwt = require('jsonwebtoken');

class WebSocketServer {
  constructor(port) {
    this.wss = new WebSocket.Server({
      port,
      verifyClient: this.authenticateClient.bind(this)
    });

    this.clients = new Map(); // userId -> WebSocket
    this.rooms = new Map();   // roomId -> Set of userIds

    this.wss.on('connection', this.handleConnection.bind(this));
  }

  authenticateClient(info) {
    try {
      const token = new URL(info.req.url, 'http://localhost').searchParams.get('token');
      const user = jwt.verify(token, process.env.JWT_SECRET);
      info.req.user = user;
      return true;
    } catch {
      return false;
    }
  }

  handleConnection(ws, req) {
    const user = req.user;
    console.log(`User ${user.id} connected`);

    // Store client connection
    this.clients.set(user.id, ws);

    ws.on('message', (data) => {
      try {
        const message = JSON.parse(data);
        this.handleMessage(user, message);
      } catch (error) {
        ws.send(JSON.stringify({ error: 'Invalid message format' }));
      }
    });

    ws.on('close', () => {
      console.log(`User ${user.id} disconnected`);
      this.clients.delete(user.id);

      // Remove from all rooms
      for (const [roomId, users] of this.rooms.entries()) {
        users.delete(user.id);
        if (users.size === 0) {
          this.rooms.delete(roomId);
        }
      }
    });

    ws.on('error', (error) => {
      console.error(`WebSocket error for user ${user.id}:`, error);
    });

    // Send welcome message
    ws.send(JSON.stringify({ type: 'connected', message: 'Welcome to the chat!' }));
  }

  handleMessage(user, message) {
    switch (message.type) {
      case 'join_room':
        this.joinRoom(user.id, message.roomId);
        break;
      case 'leave_room':
        this.leaveRoom(user.id, message.roomId);
        break;
      case 'chat_message':
        this.broadcastToRoom(message.roomId, {
          type: 'chat_message',
          user: { id: user.id, name: user.name },
          message: message.content,
          timestamp: new Date().toISOString()
        });
        break;
      case 'private_message':
        this.sendPrivateMessage(user.id, message.targetUserId, message.content);
        break;
    }
  }

  joinRoom(userId, roomId) {
    if (!this.rooms.has(roomId)) {
      this.rooms.set(roomId, new Set());
    }
    this.rooms.get(roomId).add(userId);

    // Notify room members
    this.broadcastToRoom(roomId, { type: 'user_joined', userId, roomId });
  }

  leaveRoom(userId, roomId) {
    const room = this.rooms.get(roomId);
    if (room) {
      room.delete(userId);
      // Notify room members
      this.broadcastToRoom(roomId, { type: 'user_left', userId, roomId });
    }
  }

  broadcastToRoom(roomId, message) {
    const room = this.rooms.get(roomId);
    if (room) {
      room.forEach(userId => {
        const client = this.clients.get(userId);
        if (client && client.readyState === WebSocket.OPEN) {
          client.send(JSON.stringify(message));
        }
      });
    }
  }

  sendPrivateMessage(fromUserId, toUserId, content) {
    const targetClient = this.clients.get(toUserId);
    if (targetClient && targetClient.readyState === WebSocket.OPEN) {
      targetClient.send(JSON.stringify({
        type: 'private_message',
        from: fromUserId,
        content,
        timestamp: new Date().toISOString()
      }));
    }
  }
}

// Start WebSocket server
const wsServer = new WebSocketServer(8080);

// Integration with HTTP server for scaling
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);

// Attach WebSocket server to HTTP server
const wss = new WebSocket.Server({ server });

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
```
Use Cases:
Real-time chat applications
Live gaming
Collaborative editing
Real-time dashboards
Trading platforms
Live notifications
When NOT to use WebSockets:
Simple request-response patterns
Infrequent updates (use polling or Server-Sent Events; see the sketch after this list)
High bandwidth overhead for small messages
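For the infrequent-update case above, a minimal Server-Sent Events sketch using Express; the `/events` route and the 10-second interval are illustrative assumptions, not from the original:

```javascript
const express = require('express');
const app = express();

// Server-Sent Events: one-way server-to-client stream over plain HTTP
app.get('/events', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive'
  });

  // Push an update every 10 seconds (illustrative interval)
  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ time: new Date().toISOString() })}\n\n`);
  }, 10000);

  // Stop pushing when the client disconnects
  req.on('close', () => clearInterval(timer));
});

app.listen(3000);
```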
What is the difference between monolithic and microservices architectures?
Monolithic Architecture: Single deployable unit containing all functionality.
```javascript
// Monolithic Express App
const express = require('express');
const app = express();

// All services in one application
class UserService {
  static async createUser(userData) {
    // User creation logic
    const user = await db.users.create(userData);

    // Send welcome email (tightly coupled)
    await EmailService.sendWelcomeEmail(user);

    // Update analytics (tightly coupled)
    await AnalyticsService.trackUserRegistration(user);

    return user;
  }
}

class OrderService {
  static async createOrder(orderData) {
    // Order logic
    const order = await db.orders.create(orderData);

    // Process payment (tightly coupled)
    await PaymentService.processPayment(order);

    // Update inventory (tightly coupled)
    await InventoryService.updateStock(order.items);

    return order;
  }
}

// All routes in one app
app.post('/users', async (req, res) => {
  const user = await UserService.createUser(req.body);
  res.json(user);
});

app.post('/orders', async (req, res) => {
  const order = await OrderService.createOrder(req.body);
  res.json(order);
});

app.listen(3000);
```
Microservices Architecture: A collection of small, independently deployable services that communicate over the network.

```javascript
// User Service (separate app)
const express = require('express');
const amqp = require('amqplib');

class UserService {
  constructor() {
    this.setupMessageQueue();
  }

  async setupMessageQueue() {
    this.connection = await amqp.connect('amqp://localhost');
    this.channel = await this.connection.createChannel();
    await this.channel.assertExchange('events', 'topic', { durable: true });
  }

  async createUser(userData) {
    const user = await db.users.create(userData);

    // Publish events instead of direct calls
    await this.publishEvent('user.created', user);

    return user;
  }

  async publishEvent(eventType, data) {
    this.channel.publish('events', eventType, Buffer.from(JSON.stringify(data)));
  }
}

const userService = new UserService();
const app = express();
app.use(express.json());

app.post('/users', async (req, res) => {
  try {
    const user = await userService.createUser(req.body);
    res.json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(3001);

// Email Service (separate app)
const emailApp = express();

class EmailService {
  constructor() {
    this.setupMessageQueue();
  }

  async setupMessageQueue() {
    this.connection = await amqp.connect('amqp://localhost');
    this.channel = await this.connection.createChannel();

    // Subscribe to user.created events
    await this.channel.assertExchange('events', 'topic', { durable: true });
    const { queue } = await this.channel.assertQueue('email.user.created');
    await this.channel.bindQueue(queue, 'events', 'user.created');
    this.channel.consume(queue, this.handleUserCreated.bind(this));
  }

  async handleUserCreated(message) {
    const user = JSON.parse(message.content.toString());
    await this.sendWelcomeEmail(user);
    this.channel.ack(message);
  }

  async sendWelcomeEmail(user) {
    // Email logic
    console.log(`Sending welcome email to ${user.email}`);
  }
}

new EmailService();
emailApp.listen(3002);

// API Gateway
const { createProxyMiddleware } = require('http-proxy-middleware');
const gateway = express();

// Route to appropriate services
gateway.use('/api/users', createProxyMiddleware({
  target: 'http://localhost:3001',
  changeOrigin: true,
  pathRewrite: { '^/api/users': '/users' }
}));

gateway.use('/api/orders', createProxyMiddleware({
  target: 'http://localhost:3003',
  changeOrigin: true,
  pathRewrite: { '^/api/orders': '/orders' }
}));

gateway.listen(3000);
```
Comparison:

| Aspect | Monolithic | Microservices |
| --- | --- | --- |
| Deployment | Single unit | Independent services |
| Scalability | Scale entire app | Scale individual services |
| Technology | Single stack | Different stacks per service |
| Data | Shared database | Database per service |
| Communication | In-process | Network calls (HTTP/messaging) |
| Complexity | Lower initially | Higher operational complexity |
| Development | Easier coordination | Independent teams |
| Testing | Simpler integration | Complex distributed testing |
| Failure | Single point of failure | Fault isolation |
How does NodeJS handle memory management?
NodeJS uses V8's garbage collector with automatic memory management, but understanding memory usage is crucial for performance.
```javascript
// Memory monitoring and optimization
const v8 = require('v8');
const process = require('process');

class MemoryMonitor {
  static getMemoryUsage() {
    const usage = process.memoryUsage();
    const heapStats = v8.getHeapStatistics();

    return {
      // Process memory usage
      rss: `${Math.round(usage.rss / 1024 / 1024)}MB`, // Resident Set Size
      heapUsed: `${Math.round(usage.heapUsed / 1024 / 1024)}MB`,
      heapTotal: `${Math.round(usage.heapTotal / 1024 / 1024)}MB`,
      external: `${Math.round(usage.external / 1024 / 1024)}MB`,

      // V8 heap statistics
      totalHeapSize: `${Math.round(heapStats.total_heap_size / 1024 / 1024)}MB`,
      usedHeapSize: `${Math.round(heapStats.used_heap_size / 1024 / 1024)}MB`,
      heapSizeLimit: `${Math.round(heapStats.heap_size_limit / 1024 / 1024)}MB`,
    };
  }

  static startMonitoring(intervalMs = 5000) {
    setInterval(() => {
      const memory = this.getMemoryUsage();
      console.log('Memory Usage:', memory);

      // Alert if memory usage is high
      const heapUsedMB = parseInt(memory.heapUsed);
      const heapLimitMB = parseInt(memory.heapSizeLimit);

      if (heapUsedMB / heapLimitMB > 0.8) {
        console.warn('HIGH MEMORY USAGE DETECTED!');

        // Force garbage collection (only in development)
        if (global.gc && process.env.NODE_ENV !== 'production') {
          global.gc();
          console.log('Garbage collection forced');
        }
      }
    }, intervalMs);
  }
}

// Memory leak examples and fixes

// ❌ Memory leak: Event listeners not removed
class BadEventHandler {
  constructor() {
    this.data = new Array(1000000).fill('data');
    process.on('SIGUSR1', this.handleSignal.bind(this));
    // Missing cleanup!
  }

  handleSignal() {
    console.log('Signal received');
  }
}

// ✅ Good: Proper cleanup
class GoodEventHandler {
  constructor() {
    this.data = new Array(1000000).fill('data');
    this.handleSignal = this.handleSignal.bind(this);
    process.on('SIGUSR1', this.handleSignal);
  }

  handleSignal() {
    console.log('Signal received');
  }

  destroy() {
    process.removeListener('SIGUSR1', this.handleSignal);
    this.data = null;
  }
}

// ❌ Memory leak: Closures holding references
function createBadProcessor() {
  const largeData = new Array(1000000).fill('data');

  return function process(input) {
    // This closure keeps largeData in memory even if not used
    return input.toUpperCase();
  };
}

// ✅ Good: Don't capture unnecessary variables
function createGoodProcessor() {
  return function process(input) {
    return input.toUpperCase();
  };
}

// Memory-efficient data processing
class StreamProcessor {
  static async processLargeFile(filePath, processLine) {
    const fs = require('fs');
    const readline = require('readline');

    const fileStream = fs.createReadStream(filePath);
    const rl = readline.createInterface({
      input: fileStream,
      crlfDelay: Infinity
    });

    let lineCount = 0;
    for await (const line of rl) {
      await processLine(line, lineCount++);

      // Yield control occasionally
      if (lineCount % 1000 === 0) {
        await new Promise(resolve => setImmediate(resolve));
      }
    }
  }

  // Object pooling for frequently created objects
  static createObjectPool(createFn, resetFn, size = 10) {
    const pool = [];

    for (let i = 0; i < size; i++) {
      pool.push(createFn());
    }

    return {
      acquire() {
        return pool.pop() || createFn();
      },
      release(obj) {
        if (pool.length < size) {
          resetFn(obj);
          pool.push(obj);
        }
      }
    };
  }
}

// Usage examples
const bufferPool = StreamProcessor.createObjectPool(
  () => Buffer.allocUnsafe(1024),
  (buffer) => buffer.fill(0),
  50
);

// Memory optimization tips implementation
class MemoryOptimizedCache {
  constructor(maxSize = 1000, ttl = 300000) { // 5 minutes
    this.cache = new Map();
    this.maxSize = maxSize;
    this.ttl = ttl;

    // Periodic cleanup
    setInterval(() => this.cleanup(), ttl / 2);
  }

  set(key, value) {
    // LRU eviction
    if (this.cache.size >= this.maxSize && !this.cache.has(key)) {
      const firstKey = this.cache.keys().next().value;
      this.cache.delete(firstKey);
    }

    this.cache.set(key, { value, timestamp: Date.now() });
  }

  get(key) {
    const item = this.cache.get(key);
    if (!item) return null;

    // Check TTL
    if (Date.now() - item.timestamp > this.ttl) {
      this.cache.delete(key);
      return null;
    }

    // Move to end (LRU)
    this.cache.delete(key);
    this.cache.set(key, item);

    return item.value;
  }

  cleanup() {
    const now = Date.now();
    for (const [key, item] of this.cache) {
      if (now - item.timestamp > this.ttl) {
        this.cache.delete(key);
      }
    }
  }
}

// Start monitoring
MemoryMonitor.startMonitoring();

module.exports = { MemoryMonitor, StreamProcessor, MemoryOptimizedCache };
```
Key Concepts:
Heap: Where objects are allocated
Stack: Function calls and local variables
Garbage Collection: Automatic memory cleanup
Memory Leaks: References preventing GC
Common Memory Issues:
Event listeners not removed
Closures holding references
Global variables
Circular references
Large objects in memory
Databases and System Design
What is the difference between SQL and NoSQL databases?
SQL (Relational) Databases: Structured data with predefined schemas, ACID properties.
NoSQL Databases: Flexible schemas, designed for scalability and performance (see the MongoDB sketch after the SQL example).
```sql
-- SQL Example (PostgreSQL)

-- Schema definition
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    title VARCHAR(255) NOT NULL,
    content TEXT,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    post_id INTEGER REFERENCES posts(id),
    user_id INTEGER REFERENCES users(id),
    content TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Complex joins
SELECT
    u.name,
    p.title,
    COUNT(c.id) AS comment_count
FROM users u
LEFT JOIN posts p ON u.id = p.user_id
LEFT JOIN comments c ON p.id = c.post_id
WHERE u.created_at >= '2024-01-01'
GROUP BY u.id, u.name, p.id, p.title
HAVING COUNT(c.id) > 5
ORDER BY comment_count DESC;
```
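For contrast, a hedged MongoDB sketch of roughly the same data modeled as flexible, denormalized documents; the collection and field names here are illustrative, not from the original:

```javascript
// NoSQL Example (MongoDB) - comments are embedded in the post
// rather than joined from a separate table
db.posts.insertOne({
  title: 'Intro to NoSQL',
  content: 'Documents can nest related data...',
  author: { id: 123, name: 'John', email: 'john@example.com' },
  tags: ['databases', 'nosql'], // schema can vary per document
  comments: [
    { userId: 456, text: 'Great post!', createdAt: new Date() }
  ],
  createdAt: new Date()
});

// No join needed: one query returns the post together with its comments
db.posts.find({ 'author.id': 123, createdAt: { $gte: ISODate('2024-01-01') } });
```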
What are database indexes, and how do they work?
Database indexes are data structures that improve query performance by creating shortcuts to data.
```sql
-- SQL Indexing Examples

-- B-Tree Index (most common)
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Composite Index
CREATE INDEX idx_posts_user_date ON posts(user_id, created_at);

-- Partial Index
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

-- Unique Index
CREATE UNIQUE INDEX idx_users_email_unique ON users(email);

-- Functional Index
CREATE INDEX idx_users_email_lower ON users(LOWER(email));

-- Query execution with indexes
EXPLAIN ANALYZE SELECT * FROM users WHERE email = 'john@example.com';
-- Index Scan using idx_users_email (cost=0.28..8.30 rows=1)

EXPLAIN ANALYZE SELECT * FROM posts WHERE user_id = 123 ORDER BY created_at DESC;
-- Index Scan using idx_posts_user_date (cost=0.29..15.32 rows=10)
```
```javascript
// MongoDB Indexing

// Single field index
db.users.createIndex({ email: 1 }); // 1 = ascending, -1 = descending

// Compound index
db.posts.createIndex({ user_id: 1, created_at: -1 });

// Text index for full-text search
db.posts.createIndex({ title: "text", content: "text" });

// Geospatial index
db.locations.createIndex({ coordinates: "2dsphere" });

// Partial index
db.users.createIndex(
  { email: 1 },
  { partialFilterExpression: { active: true } }
);

// TTL index (automatic expiration)
db.sessions.createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 3600 } // 1 hour
);

// Query performance analysis
db.users.find({ email: "john@example.com" }).explain("executionStats");
```
Index Implementation:
```javascript
// Simplified B-Tree implementation for understanding
class BTreeNode {
  constructor(isLeaf = false) {
    this.keys = [];
    this.values = [];   // For leaf nodes
    this.children = []; // For internal nodes
    this.isLeaf = isLeaf;
  }
}

class BTreeIndex {
  constructor(degree = 3) {
    this.root = new BTreeNode(true);
    this.degree = degree; // Minimum degree
  }

  search(key, node = this.root) {
    let i = 0;

    // Find the position where key might exist
    while (i < node.keys.length && key > node.keys[i]) {
      i++;
    }

    // If key found
    if (i < node.keys.length && key === node.keys[i]) {
      return node.isLeaf ? node.values[i] : this.search(key, node.children[i + 1]);
    }

    // If leaf node and key not found
    if (node.isLeaf) {
      return null;
    }

    // Recurse on appropriate child
    return this.search(key, node.children[i]);
  }

  insert(key, value) {
    // Implementation would handle node splits when full
    // This is a simplified version
    if (this.root.keys.length === (2 * this.degree) - 1) {
      const newRoot = new BTreeNode();
      newRoot.children.push(this.root);
      this.splitChild(newRoot, 0);
      this.root = newRoot;
    }
    this.insertNonFull(this.root, key, value);
  }

  // Range query support
  rangeQuery(startKey, endKey, node = this.root, results = []) {
    if (!node) return results;

    for (let i = 0; i < node.keys.length; i++) {
      if (node.keys[i] >= startKey && node.keys[i] <= endKey) {
        if (node.isLeaf) {
          results.push({ key: node.keys[i], value: node.values[i] });
        }
      }
      if (!node.isLeaf && node.keys[i] <= endKey) {
        this.rangeQuery(startKey, endKey, node.children[i], results);
      }
    }

    if (!node.isLeaf) {
      this.rangeQuery(startKey, endKey, node.children[node.keys.length], results);
    }

    return results;
  }
}

// Hash Index implementation
class HashIndex {
  constructor(size = 1000) {
    this.buckets = new Array(size).fill(null).map(() => []);
    this.size = size;
  }

  hash(key) {
    let hash = 0;
    for (let i = 0; i < key.length; i++) {
      const char = key.charCodeAt(i);
      hash = ((hash << 5) - hash) + char;
      hash = hash & hash; // Convert to 32-bit integer
    }
    return Math.abs(hash) % this.size;
  }

  insert(key, value) {
    const index = this.hash(key);
    const bucket = this.buckets[index];

    // Check if key already exists
    for (let i = 0; i < bucket.length; i++) {
      if (bucket[i].key === key) {
        bucket[i].value = value;
        return;
      }
    }

    bucket.push({ key, value });
  }

  search(key) {
    const index = this.hash(key);
    const bucket = this.buckets[index];

    for (const item of bucket) {
      if (item.key === key) {
        return item.value;
      }
    }
    return null;
  }
}

// Index usage analysis
class QueryOptimizer {
  static analyzeQuery(query, availableIndexes) {
    const analysis = {
      suggestedIndex: null,
      estimatedCost: Infinity,
      explanation: ''
    };

    // Simple rule-based optimization
    if (query.where) {
      for (const [field, condition] of Object.entries(query.where)) {
        const index = availableIndexes.find(idx =>
          idx.fields.includes(field) ||
          (idx.fields[0] === field && idx.type === 'btree')
        );

        if (index) {
          let cost = 1; // Base cost for index lookup

          if (condition.operator === 'range') {
            cost += Math.log2(index.cardinality); // B-tree range scan
          } else if (condition.operator === 'equality') {
            cost = 1; // Direct lookup
          }

          if (cost < analysis.estimatedCost) {
            analysis.suggestedIndex = index.name;
            analysis.estimatedCost = cost;
            analysis.explanation = `Use ${index.name} for efficient ${condition.operator} lookup on ${field}`;
          }
        }
      }
    }

    return analysis;
  }
}
```
Index Types & Use Cases:
B-Tree: General purpose, range queries, sorting
Hash: Equality lookups only, very fast
Bitmap: Low cardinality columns, analytics
GiST/GIN: Full-text search, arrays, JSON
R-Tree: Geospatial data
Index Trade-offs:
Performance: Faster reads, slower writes
Storage: Additional space overhead
Maintenance: Kept in sync with data changes
What are ACID properties in a database?
ACID ensures database transactions maintain data integrity even in the presence of errors, power failures, or concurrent access.
```javascript
// ACID Implementation Examples
const { Pool } = require('pg'); // Pool is used by DatabaseManager below

// 1. ATOMICITY - All or nothing
class BankingService {
  async transferMoney(fromAccount, toAccount, amount) {
    const transaction = await db.beginTransaction();

    try {
      // Both operations must succeed or both fail
      const fromBalance = await db.query(
        'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',
        [fromAccount],
        { transaction }
      );

      if (fromBalance[0].balance < amount) {
        throw new Error('Insufficient funds');
      }

      await db.query(
        'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
        [amount, fromAccount],
        { transaction }
      );

      await db.query(
        'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
        [amount, toAccount],
        { transaction }
      );

      // Record transaction history
      await db.query(
        'INSERT INTO transactions (from_account, to_account, amount, type) VALUES ($1, $2, $3, $4)',
        [fromAccount, toAccount, amount, 'transfer'],
        { transaction }
      );

      await transaction.commit(); // All operations succeed
      return { success: true, transactionId: transaction.id };
    } catch (error) {
      await transaction.rollback(); // All operations fail
      throw error;
    }
  }
}

// 2. CONSISTENCY - Data integrity rules maintained
class UserRegistrationService {
  async createUser(userData) {
    const transaction = await db.beginTransaction();

    try {
      // Check constraints before inserting
      const existingUser = await db.query(
        'SELECT id FROM users WHERE email = $1',
        [userData.email],
        { transaction }
      );

      if (existingUser.length > 0) {
        throw new Error('Email already exists'); // Maintain uniqueness constraint
      }

      // Validate data integrity
      if (!this.isValidEmail(userData.email)) {
        throw new Error('Invalid email format');
      }

      if (userData.age < 13) {
        throw new Error('Age must be 13 or older'); // Business rule
      }

      const user = await db.query(
        'INSERT INTO users (email, name, age) VALUES ($1, $2, $3) RETURNING *',
        [userData.email, userData.name, userData.age],
        { transaction }
      );

      // Create associated profile (referential integrity)
      await db.query(
        'INSERT INTO user_profiles (user_id, created_at) VALUES ($1, $2)',
        [user[0].id, new Date()],
        { transaction }
      );

      await transaction.commit();
      return user[0];
    } catch (error) {
      await transaction.rollback();
      throw error;
    }
  }

  isValidEmail(email) {
    return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  }
}

// 3. ISOLATION - Concurrent transactions don't interfere
class InventoryService {
  async purchaseItem(itemId, quantity, userId) {
    // Use appropriate isolation level
    const transaction = await db.beginTransaction({
      isolationLevel: 'READ_COMMITTED' // or SERIALIZABLE for strict isolation
    });

    try {
      // Lock row to prevent concurrent modifications
      const item = await db.query(
        'SELECT * FROM inventory WHERE id = $1 FOR UPDATE',
        [itemId],
        { transaction }
      );

      if (!item.length) {
        throw new Error('Item not found');
      }

      if (item[0].stock < quantity) {
        throw new Error('Insufficient stock');
      }

      // Update inventory
      await db.query(
        'UPDATE inventory SET stock = stock - $1, updated_at = NOW() WHERE id = $2',
        [quantity, itemId],
        { transaction }
      );

      // Create order
      const order = await db.query(
        'INSERT INTO orders (user_id, item_id, quantity, status) VALUES ($1, $2, $3, $4) RETURNING *',
        [userId, itemId, quantity, 'pending'],
        { transaction }
      );

      await transaction.commit();
      return order[0];
    } catch (error) {
      await transaction.rollback();
      throw error;
    }
  }

  // Demonstration of isolation levels
  async demonstrateIsolationLevels() {
    // READ UNCOMMITTED - Can see uncommitted changes (dirty reads)
    const readUncommitted = await db.beginTransaction({ isolationLevel: 'READ_UNCOMMITTED' });

    // READ COMMITTED - Only see committed changes (default in most DBs)
    const readCommitted = await db.beginTransaction({ isolationLevel: 'READ_COMMITTED' });

    // REPEATABLE READ - Same data throughout transaction
    const repeatableRead = await db.beginTransaction({ isolationLevel: 'REPEATABLE_READ' });

    // SERIALIZABLE - Strongest isolation, prevents all anomalies
    const serializable = await db.beginTransaction({ isolationLevel: 'SERIALIZABLE' });
  }
}

// 4. DURABILITY - Committed changes persist
class AuditLogger {
  async logCriticalAction(action, userId, details) {
    const transaction = await db.beginTransaction();

    try {
      // Write to audit log
      await db.query(
        'INSERT INTO audit_log (action, user_id, details, timestamp) VALUES ($1, $2, $3, $4)',
        [action, userId, JSON.stringify(details), new Date()],
        { transaction }
      );

      // Force write to disk (WAL - Write-Ahead Logging)
      await db.query('SELECT pg_switch_wal()', [], { transaction });

      await transaction.commit();

      // Additional durability measures
      await this.writeToSecondaryStorage(action, userId, details);

      return true;
    } catch (error) {
      await transaction.rollback();
      throw error;
    }
  }

  async writeToSecondaryStorage(action, userId, details) {
    // Write to file system, S3, or another database for extra durability
    const fs = require('fs').promises;
    const logEntry = {
      timestamp: new Date().toISOString(),
      action,
      userId,
      details
    };
    await fs.appendFile('/var/log/critical-actions.log', JSON.stringify(logEntry) + '\n');
  }
}

// Practical ACID implementation with connection pooling
class DatabaseManager {
  constructor() {
    this.pool = new Pool({
      host: 'localhost',
      database: 'myapp',
      user: 'postgres',
      password: 'password',
      max: 20, // Max connections
      idleTimeoutMillis: 30000,
      connectionTimeoutMillis: 2000,
    });
  }

  async executeTransaction(operations) {
    const client = await this.pool.connect();

    try {
      await client.query('BEGIN');

      const results = [];
      for (const operation of operations) {
        const result = await operation(client);
        results.push(result);
      }

      await client.query('COMMIT');
      return results;
    } catch (error) {
      await client.query('ROLLBACK');
      throw error;
    } finally {
      client.release();
    }
  }

  // Saga pattern for distributed transactions
  async executeSaga(steps) {
    const compensations = [];
    const results = [];

    try {
      for (const step of steps) {
        const result = await step.execute();
        results.push(result);

        if (step.compensate) {
          compensations.unshift(step.compensate); // LIFO order
        }
      }
      return results;
    } catch (error) {
      // Execute compensations in reverse order
      for (const compensate of compensations) {
        try {
          await compensate();
        } catch (compensateError) {
          console.error('Compensation failed:', compensateError);
        }
      }
      throw error;
    }
  }
}

// Usage example
const dbManager = new DatabaseManager();

async function complexBusinessOperation() {
  return await dbManager.executeTransaction([
    // Each operation receives the client connection
    async (client) => {
      return await client.query(
        'INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING id',
        [userId, total]
      );
    },
    async (client) => {
      return await client.query(
        'UPDATE inventory SET stock = stock - $1 WHERE product_id = $2',
        [quantity, productId]
      );
    },
    async (client) => {
      return await client.query(
        'INSERT INTO order_items (order_id, product_id, quantity) VALUES ($1, $2, $3)',
        [orderId, productId, quantity]
      );
    }
  ]);
}
```
ACID in Different Database Types:
Traditional RDBMS (PostgreSQL, MySQL):
Full ACID compliance
Strong consistency
Complex transactions
NoSQL Approaches:
MongoDB: ACID for single documents, multi-document transactions available
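A minimal sketch of a MongoDB multi-document transaction with the official Node.js driver; the connection string, database, and collection names are illustrative assumptions:

```javascript
const { MongoClient } = require('mongodb');

async function transferPoints(fromId, toId, amount) {
  const client = new MongoClient('mongodb://localhost:27017'); // illustrative URI
  await client.connect();
  const session = client.startSession();

  try {
    // Both updates commit together or not at all
    await session.withTransaction(async () => {
      const accounts = client.db('app').collection('accounts');
      await accounts.updateOne({ _id: fromId }, { $inc: { points: -amount } }, { session });
      await accounts.updateOne({ _id: toId }, { $inc: { points: amount } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}
```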
Strategies for handling high traffic:
Load Balancing: Distribute traffic across instances
Rate Limiting: Protect against abuse (a minimal sketch follows this list)
Circuit Breakers: Prevent cascade failures
Async Processing: Queue heavy operations
Monitoring: Real-time metrics and alerts
Auto-scaling: Dynamic resource allocation
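A minimal sketch of one of these strategies, rate limiting, implemented as an Express middleware with a fixed-window counter kept in memory; the window size, limit, and route names are illustrative assumptions, and production systems usually back this with Redis or a library such as express-rate-limit:

```javascript
const express = require('express');
const app = express();

// Fixed-window rate limiter: at most `limit` requests per IP per window
function rateLimit({ windowMs = 60000, limit = 100 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }

  return (req, res, next) => {
    const now = Date.now();
    const entry = hits.get(req.ip);

    // First request from this IP, or the previous window has expired
    if (!entry || now - entry.windowStart > windowMs) {
      hits.set(req.ip, { count: 1, windowStart: now });
      return next();
    }

    if (entry.count >= limit) {
      return res.status(429).json({ error: 'Too many requests' });
    }

    entry.count++;
    next();
  };
}

app.use(rateLimit({ windowMs: 60000, limit: 100 }));
app.get('/api/data', (req, res) => res.json({ ok: true }));
app.listen(3000);
```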
Performance Optimizations:
HTTP/2 and HTTP/3 support
Compression (gzip, brotli) — see the sketch after this list
Connection pooling
Keep-alive connections
Minimize middleware overhead
Use CDN for static assets
Database query optimization
Implement proper indexing
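A hedged sketch wiring a couple of the items above into an Express app: response compression via the `compression` middleware, a shared PostgreSQL connection pool, and a tuned keep-alive timeout; the credentials, pool size, and timeout values are illustrative assumptions:

```javascript
const express = require('express');
const compression = require('compression'); // gzip-capable response compression
const { Pool } = require('pg');

const app = express();
app.use(compression()); // Compress responses above the default size threshold

// Reuse database connections instead of opening one per request
const pool = new Pool({
  host: 'localhost',
  database: 'myapp',
  max: 20 // illustrative pool size
});

app.get('/api/users/:id', async (req, res) => {
  const { rows } = await pool.query('SELECT id, name FROM users WHERE id = $1', [req.params.id]);
  res.json(rows[0] || null);
});

// Node's HTTP server keeps connections alive by default; keepAliveTimeout tunes it
const server = app.listen(3000);
server.keepAliveTimeout = 65000; // illustrative value, slightly above a typical load-balancer idle timeout
```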
This architecture can handle millions of requests per second through proper implementation of these patterns and continuous optimization based on monitoring data.
Data Structures and Algorithms (DSA)
Given an array, find the maximum sum of any contiguous subarray.
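No solution accompanies the question in the original; the standard approach is Kadane's algorithm, sketched below in O(n) time and O(1) extra space (the function name `maxSubarraySum` is illustrative):

```javascript
// Kadane's algorithm: track the best subarray sum ending at each index
function maxSubarraySum(nums) {
  let best = nums[0];    // best sum seen so far
  let current = nums[0]; // best sum of a subarray ending at the current index

  for (let i = 1; i < nums.length; i++) {
    // Either extend the previous subarray or start fresh at nums[i]
    current = Math.max(nums[i], current + nums[i]);
    best = Math.max(best, current);
  }
  return best;
}

console.log(maxSubarraySum([-2, 1, -3, 4, -1, 2, 1, -5, 4])); // 6, from [4, -1, 2, 1]
```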