Frequently Asked Questions:
Changed in version 2.2.
MongoDB allows multiple clients to read and write a single corpus of data using a locking system to ensure that all clients receive the same view of the data and to prevent multiple applications from modifying the exact same pieces of data at the same time. Locks help guarantee that all writes to a single document occur either in full or not at all.
MongoDB uses a readers-writer lock that allows concurrent read access to a database but gives exclusive access to a single write operation.
When a read lock exists, many read operations may use this lock. However, when a write lock exists, a single write operation holds the lock exclusively, and no other read or write operations may share the lock.
Locks are “writer greedy,” which means writes have preference over reads. When both a read and write are waiting for a lock, MongoDB grants the lock to the write.
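The writer-greedy grant policy described above can be sketched as a small state machine. This is an illustrative sketch of the policy only, not MongoDB's internal lock implementation; the function names are hypothetical.

```javascript
// Illustrative writer-greedy readers-writer lock grant policy
// (not MongoDB's actual implementation).
function makeLock() {
  return { readers: 0, writer: false, waitingReaders: 0, waitingWriters: 0 };
}

// A reader may enter only when no writer holds the lock AND no writer is
// waiting: "writer greedy" means queued writes block new reads.
function tryRead(lock) {
  if (!lock.writer && lock.waitingWriters === 0) {
    lock.readers += 1;
    return true;
  }
  lock.waitingReaders += 1;
  return false;
}

// A writer needs exclusive access: no active readers and no active writer.
function tryWrite(lock) {
  if (!lock.writer && lock.readers === 0) {
    lock.writer = true;
    return true;
  }
  lock.waitingWriters += 1;
  return false;
}
```

Note that once a writer is queued, even read requests that could otherwise share the lock are made to wait, which is what gives writes preference.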
You may be familiar with a “readers-writer” lock as a “multi-reader” or “shared exclusive” lock. See the Wikipedia page on Readers-Writer Locks for more information.
Changed in version 2.2.
Beginning with version 2.2, MongoDB implements locks on a per-database basis for most read and write operations. Some global operations, typically short-lived operations involving multiple databases, still require an instance-wide “global” lock. Before 2.2, there was only one “global” lock per mongod instance.
For example, if you have six databases and one takes a write lock, the other five are still available for read and write.
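The per-database model can be sketched as a map from database name to lock state. This is an illustrative sketch only, not MongoDB's implementation, and the database names are hypothetical.

```javascript
// Illustrative per-database write-lock table (not MongoDB's actual
// implementation): each database name maps to a "locked" flag.
const dbLocks = new Map();

function acquireWriteLock(dbName) {
  if (dbLocks.get(dbName)) return false; // this database is busy
  dbLocks.set(dbName, true);
  return true;
}

function releaseWriteLock(dbName) {
  dbLocks.set(dbName, false);
}
```

Because each database has its own entry, a write lock on one database never consults or blocks the entries for the others.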
Several methods report on lock utilization. In particular, the locks document in the output of serverStatus, and the locks field in current operation reporting, provide insight into the types of locks held and the amount of lock contention in your mongod instance.
To terminate an operation, use db.killOp().
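For example, the following mongo shell session fragment (assuming a running mongod; the operation ID shown is a hypothetical value) inspects lock state and terminates a long-running operation:

```javascript
// Inspect lock statistics and contention (requires a running mongod).
db.serverStatus().locks   // per-database lock statistics
db.currentOp()            // in-progress operations, including their locks

// Terminate a specific operation by its opid; 12345 is a hypothetical
// value taken from the "opid" field of db.currentOp() output.
db.killOp(12345)
```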
In some situations, read and write operations can yield their locks.
Long-running read and write operations, such as queries, updates, and deletes, yield under many conditions. In MongoDB 2.0, operations yielded based on time slices and the number of operations waiting for the actively held lock. Beginning in 2.2, more adaptive algorithms allow operations to yield based on predicted disk access (i.e. page faults).
New in version 2.0: Read and write operations will yield their locks if the mongod receives a page fault or fetches data that is unlikely to be in memory. Yielding allows other operations that only need to access documents that are already in memory to complete while mongod loads documents into memory.
Additionally, write operations that affect multiple documents (i.e. update() with the multi parameter) will yield periodically to allow read operations to proceed during these long write operations. Similarly, long-running read operations will yield their locks periodically to ensure that write operations have the opportunity to complete.
Changed in version 2.2: The use of yielding expanded greatly in MongoDB 2.2, including the “yield for page fault”: MongoDB tracks the contents of memory and predicts whether data is available before performing a read. If MongoDB predicts that the data is not in memory, a read operation yields its lock while MongoDB loads the data into memory. Once the data is available in memory, the read reacquires the lock to complete the operation.
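The yield-on-page-fault behavior can be sketched as follows. This is an illustrative model only, not MongoDB's internal code; the cache, disk, and event log are hypothetical stand-ins for MongoDB's memory tracking and disk fetch.

```javascript
// Illustrative sketch of "yield on predicted page fault": if the
// document is predicted to be out of memory, the read yields its lock,
// loads the data with no lock held, then reacquires the lock.
function readWithYield(cache, disk, key, events) {
  events.push('acquire read lock');
  if (!cache.has(key)) {
    events.push('yield');          // predicted page fault: release the lock
    cache.set(key, disk[key]);     // fetch from disk while holding no lock
    events.push('reacquire read lock');
  }
  const value = cache.get(key);
  events.push('release read lock');
  return value;
}
```

A cold read yields while the data loads; a warm read, whose data is already in memory, completes without ever releasing its lock.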
Changed in version 2.2.
The following table lists common database operations and the types of locks they use.
| Operation | Lock type |
| --- | --- |
| Issue a query | Read lock |
| Get more data from a cursor | Read lock |
| Insert data | Write lock |
| Remove data | Write lock |
| Update data | Write lock |
| Map-reduce | Read lock and write lock, unless operations are specified as non-atomic. Portions of map-reduce jobs can run concurrently. |
| Create an index | Building an index in the foreground, which is the default, locks the database for extended periods of time. |
| eval | Write lock. If used with the nolock option, eval does not take a write lock and cannot write data to the database. |
Certain administrative commands can exclusively lock the database for extended periods of time. In some deployments, particularly those with large databases, you may want to take the mongod instance offline so that clients are not affected. For example, if a mongod is part of a replica set, take the mongod offline and let the other members of the set service the load while maintenance is in progress.
The following administrative operations require an exclusive (i.e. write) lock on the database for extended periods:
The following administrative commands lock the database but only hold the lock for a very short time:
The following MongoDB operations lock multiple databases:
Sharding improves concurrency by distributing collections over multiple mongod instances, allowing the query routers (i.e. mongos processes) to perform any number of operations concurrently against the various downstream mongod instances.
In replication, when MongoDB writes to a collection on the primary, MongoDB also writes to the primary’s oplog, which is a special collection in the local database. Therefore, MongoDB must lock both the collection’s database and the local database. The mongod must lock both databases at the same time to keep the database valid and ensure that write operations, even with replication, are “all-or-nothing” operations.
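The both-databases requirement can be sketched as taking both write locks up front, or neither. This is an illustrative sketch only, not MongoDB's implementation; the lock set and callback names are hypothetical.

```javascript
// Illustrative sketch of an "all-or-nothing" replicated write: the
// server holds write locks on both the target database and `local`
// (home of the oplog) so the pair of writes is applied together.
const held = new Set();

function replicatedWrite(dbName, applyDoc, applyOplog) {
  if (held.has(dbName) || held.has('local')) return false; // both or neither
  held.add(dbName);
  held.add('local');
  try {
    applyDoc();    // write to the collection in dbName
    applyOplog();  // record the operation in the oplog in local
  } finally {
    held.delete(dbName);
    held.delete('local');
  }
  return true;
}
```

Acquiring both locks before applying either write is what prevents another operation from observing the collection updated but the oplog not yet written.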
In replication, MongoDB does not apply writes serially to secondaries. Secondaries collect oplog entries in batches and then apply those batches in parallel. Secondaries do not allow reads while applying the write operations, and apply write operations in the order that they appear in the oplog.
MongoDB can apply several writes in parallel on replica set secondaries, in two phases: