Some of the most important commands and reference information for a MongoDB DBA
-------------Manually Install MongoDB Service -----------
mkdir c:\mongodata\inMemory\db
mkdir c:\mongodata\mmapv1\db
mkdir c:\mongodata\wiredTiger\db
mkdir c:\mongodata\inMemory\log
mkdir c:\mongodata\mmapv1\log
mkdir c:\mongodata\wiredTiger\log
sc.exe create MongoInMemory binPath= "\"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe\" --service --config=\"C:\Program Files\MongoDB\Server\3.4\inMemory.cfg\"" DisplayName= "MongoInMemory" start= "auto"
sc.exe create MongoWiredTiger binPath= "\"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe\" --service --config=\"C:\Program Files\MongoDB\Server\3.4\wiredTiger.cfg\"" DisplayName= "MongoWiredTiger" start= "auto"
sc.exe create MongoMmapv1 binPath= "\"C:\Program Files\MongoDB\Server\3.4\bin\mongod.exe\" --service --config=\"C:\Program Files\MongoDB\Server\3.4\mmapv1.cfg\"" DisplayName= "MongoMmapv1" start= "auto"
net start MongoInMemory
net start MongoWiredTiger
net start MongoMmapv1
sc delete MongoInMemory --> delete the service
sc queryex MongoInMemory --> query a service
sc queryex type= service state= inactive --> search inactive services
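The .cfg files referenced above are not included in this gist; a minimal sketch of wiredTiger.cfg (YAML; paths and port are assumptions) could look like:
storage:
  dbPath: c:\mongodata\wiredTiger\db
  engine: wiredTiger
systemLog:
  destination: file
  path: c:\mongodata\wiredTiger\log\mongod.log
net:
  port: 27017
(for inMemory.cfg set engine: inMemory -- Enterprise only; for mmapv1.cfg set engine: mmapv1)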
-------start mongo service ------
net start mongoInMemory (mongoWiredTiger, mongoMmapv1)
net stop mongoInMemory
netsh interface ip show address --> show ip address
------------ get mongo server status from mongo shell-----
> db.serverStatus()
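serverStatus() returns a large document; single sections can be read directly, for example:
> db.serverStatus().connections --> current/available connection counts
> db.serverStatus().mem --> memory usage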
--------------- Mongo server web -----
http://localhost:28017/ --> HTTP status interface; listens 1000 ports above the mongod port (default 27017 + 1000 = 28017)
-------------- Dump mongo db -----------
mongodump --host [..] --port [...] --out [...]
mongodump --oplog --port 30100 --> also capture oplog entries written during the dump (replica set members only)
mongodump --db control_tower_dev --> dump a single database
mongodump --db ... --collection ... --> dump a single collection
mongodump --host mongodb1.example.net --port 3017 --username user --password "pass" --out /opt/backup/mongodump-2013-10-24
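mongodump can also filter what it dumps; a sketch (db/collection names are placeholders):
mongodump --db demo --collection orders --query "{status: 'open'}" --out \backup --> dump matching documents only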
------------- Restore
mongorestore [directory]
mongorestore --drop sourcedir --> drop existing collections before restoring (overwrite)
mongorestore --drop --collection collection --db database sourcedir
mongorestore --oplogReplay --> replay the oplog from a backup created with --oplog
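A single-collection restore from the usual dump layout, as a sketch (names are placeholders):
mongorestore --drop --db demo --collection users dump/demo/users.bson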
---------------- Import/Export
mongoexport --db database --collection collname --> writes to stdout
mongoexport --db database --collection collname > filename ---> stream into file
mongoexport --db database --collection collname --fields f1,f2
mongoexport --query "{_id:{$gt:2}}" --> filter the export (still needs --db/--collection)
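The import counterpart is mongoimport, for example:
mongoimport --db database --collection collname --file filename --> one JSON document per line
mongoimport --db database --collection collname --file filename --jsonArray --> file contains a JSON array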
--------------- indexing
db.collection.getIndexes() -----> get the indexes
db.system.indexes.find()
db.collection.ensureIndex({field:1}) --> 1 = ascending, -1 = descending
db.collection.dropIndex('indexName')
db.collection.ensureIndex({'doc.subdoc': 1})
db.collection.find({}).explain() ---explain query plan
db.collection.find({}).explain('executionStats')
sparse index --> indexes only documents that contain the indexed field
db.collection.stats() --> statistics for the collection
sparse unique index --> allows multiple null/missing values (uniqueness applies only to documents that have the field)
TTL indexes --> time to live; documents are removed automatically once expireAfterSeconds has passed
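Sketches of the two index types above (collection/field names are placeholders):
> db.users.ensureIndex({email: 1}, {unique: true, sparse: true}) --> sparse unique index
> db.sessions.ensureIndex({createdAt: 1}, {expireAfterSeconds: 3600}) --> TTL: remove documents 1 hour after createdAt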
--------------- Replica set
mongod --dbpath c:\freshDb --replSet r1 --oplogSize 100
mongo --> connect to mongod service
> rs.initiate() ---> Initialize the replica set
> rs.config()
> var conf = rs.conf() --> store conf
> rs.reconfig(conf) --> reconfigure
> rs.status() --> replica health
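rs.initiate() also accepts an explicit config instead of the defaults; a sketch (hosts are assumptions):
> rs.initiate({_id: "r1", members: [
    {_id: 0, host: "localhost:30001"},
    {_id: 1, host: "localhost:30002"}
  ]})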
----------Elections ----------
Only one instance can be primary at a time in a replica set
An election chooses which instance becomes primary
Arbiter --> votes but holds no data
A maximum of 7 members can vote in an election
Start 3 mongod servers on different ports
start mongod --dbpath \freshDb\m1 --port 30001 --replSet r1
start mongod --dbpath \freshDb\m2 --port 30002 --replSet r1
start mongod --dbpath \freshDb\m3 --port 30003 --replSet r1
Connect to the services and initiate the replica set
mongo --port 30001
> rs.initiate()
> rs.add("DIPTE-E7450:30002") --> machineName:port add other machine to replicaset
> rs.config()
> rs.add("DIPTE-E7450:30003", true) --> Add as arbiter
> rs.config()
> rs.status()
> for(i=0; i<1000; i++){ db.demo.save({_id:i}) } --> write some documents to the primary and check replication
> db = connect("DIPTE-E7450:30002/demo") --> connecting to 30002 server through shell
> db.demo.find() --> fails: reads from a secondary are disabled by default (safety)
> db.getMongo().setSlaveOk()
> db.getReplicationInfo()
> rs.printReplicationInfo()
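To see how far each secondary is behind (shell helper name from the 3.x era):
> rs.printSlaveReplicationInfo() --> replication lag per secondary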
--------Failover --------
Kill the primary on 30001 while the shell is connected to a secondary
> rs.status()
The secondary should become primary
Kill the arbiter
The primary steps down to secondary: it can no longer see a majority (> 50%) of the voting members
Start the server again
start mongod --dbpath \freshDb\m2 --port 30002 --replSet r1
-------------Priority ----------
A higher priority favors a member in the election for primary, provided it is healthy
> var conf = rs.conf()
> conf.members[0].priority = 10
> rs.reconfig(conf)
> rs.conf() --> always reconfigure from the primary, or force the change with rs.reconfig(conf, {force: true})
> db.getMongo() --> shows which server the shell is connected to
----------------- Step down ------
Making changes in a live system
Step down the primary so another member takes over
> rs.stepDown(3*60) --> step down for 3 minutes
> db.adminCommand({replSetStepDown: 86400, force: 1}) --> force a step down for 24 hours
> rs.status()
-------------- Freeze -------------
Freeze a secondary so it does not become primary for the given time
> rs.freeze(5*60) --> 5 minutes
------------- Hidden ------------
Hidden members are invisible to the application, so they can never become primary
But they can still vote
> var conf = rs.conf()
> conf.members[0].priority = 0 ---> never become primary
> conf.members[0].hidden = true
> rs.reconfig(conf)
------------ Chaining ------------
Decides which instance replicates from which one
Multiple secondaries can replicate from another secondary instead of all pulling from the primary
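Chaining can be switched off via the replica set settings; a minimal sketch:
> var conf = rs.conf()
> conf.settings.chainingAllowed = false --> force secondaries to sync from the primary
> rs.reconfig(conf)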
------------- Write concern ----------
The server acknowledges a write only once the condition is fulfilled
> db.demo.insert({x: 'Hi'}, {writeConcern: {w:2}}) --> w: number of members that must acknowledge, j: wait for the journal
waits for acknowledgement from 2 members
> db.demo.insert({x: 'Hi'}, {writeConcern: {w:'majority', j: true}})
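A wtimeout is worth adding so the insert does not block forever if members are down (value is just an example):
> db.demo.insert({x: 'Hi'}, {writeConcern: {w: 'majority', wtimeout: 5000}}) --> stop waiting after 5 seconds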
---------------- Sharding --------------
Horizontal splitting of a collection
Parts of a collection can reside on different shards
Shard --> server holding part of the collection
Config server --> stores metadata about which part of a collection resides where
mongos --> query router for application commands; decides the routing internally
start mongod --configsvr --dbpath \freshDb\config1 --replSet r1 --> start a config server
start mongos --configdb r1/DIPTE-E7450 --> start the router, pointing it at the config server replicaSetName/host
Start the shard servers
start mongod --dbpath \freshDb\m1 --replSet r1 --port 30001
start mongod --dbpath \freshDb\m2 --replSet r1 --port 30002
Configure the shards from the mongos router
mongo
add shard
> sh.addShard("r1/DIPTE-E7450:30001")
> sh.addShard("r1/DIPTE-E7450:30002")
> sh.status()
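Adding shards alone does not distribute any data; sharding still has to be enabled per database and collection, a sketch (names and shard key are placeholders):
> sh.enableSharding("demo")
> sh.shardCollection("demo.users", {userId: 1}) --> userId as the shard key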
------------------------- Monitoring --------------
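Two command-line tools commonly used for this (ports are assumptions):
mongostat --port 30001 --> per-second counters: inserts, queries, memory, connections
mongotop --port 30001 --> time spent reading/writing per collection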
---------------------- Indexes ------------------------
db.collection.getIndexes() --> list all indexes
--- List all Indexes on a Database
db.getCollectionNames().forEach(function(collection) {
    var indexes = db[collection].getIndexes();
    print("Indexes for " + collection + ":");
    printjson(indexes);
});
db.Shipment.createIndex({tenantId: 1}) --> create an index
db.Shipment.dropIndex({tenantId: 1}) --> drop an index
db.Shipment.dropIndexes() --> drop all indexes
db.Shipment.reIndex() --> drop all indexes and rebuild them
-------------------------------