
Production Notes

On this page

  • Platform Support
  • Platform Support Notes
  • Platform Support Matrix
  • MongoDB dbPath
  • Concurrency
  • Data Consistency
  • Networking
  • Hardware Considerations
  • Architecture
  • Compression
  • Clock Synchronization
  • Platform Specific Considerations
  • Performance Monitoring
  • Backups

This page details system configurations that affect MongoDB, especially when running in production.

Note

MongoDB Atlas is a cloud-hosted database-as-a-service. MongoDB Cloud Manager, a hosted service, and Ops Manager, an on-premises solution, provide monitoring, backup, and automation of MongoDB instances. For documentation, see the Atlas documentation, the MongoDB Cloud Manager documentation, and the Ops Manager documentation.

To learn more about running in production for deployments hosted in MongoDB Atlas, see Atlas Production Notes.

For running in production, refer to the Recommended Platforms for operating system recommendations.

MongoDB requires the following minimum x86_64 microarchitectures:

  • For Intel x86_64, MongoDB requires one of:

    • a Sandy Bridge or later Core processor, or

    • a Tiger Lake or later Celeron or Pentium processor.

  • For AMD x86_64, MongoDB requires:

    • a Bulldozer or later processor.

Starting in MongoDB 5.0, mongod, mongos, and the legacy mongo shell no longer support x86_64 platforms which do not meet this minimum microarchitecture requirement.

MongoDB on arm64 requires the ARMv8.2-A or later microarchitecture.

Starting in MongoDB 5.0, mongod, mongos, and the legacy mongo shell no longer support arm64 platforms which do not meet this minimum microarchitecture requirement.

Note

MongoDB no longer supports single-board hardware that lacks the required CPU microarchitecture, such as the Raspberry Pi 4. See Compatibility Changes in MongoDB 5.0 for more information.

| Platform | Architecture | Edition | 7.0 | 6.0 | 5.0 | 4.4 |
| --- | --- | --- | --- | --- | --- | --- |
| Amazon Linux 2023 | x86_64 | Enterprise |  |  |  |  |
| Amazon Linux 2023 | x86_64 | Community |  |  |  |  |
| Amazon Linux V2 | x86_64 | Enterprise |  |  |  |  |
| Amazon Linux V2 | x86_64 | Community |  |  |  |  |
| Debian 12 | x86_64 | Enterprise |  |  |  |  |
| Debian 12 | x86_64 | Community |  |  |  |  |
| Debian 11 | x86_64 | Enterprise |  |  | 5.0.8+ |  |
| Debian 11 | x86_64 | Community |  |  | 5.0.8+ |  |
| Debian 10 | x86_64 | Enterprise |  |  |  |  |
| Debian 10 | x86_64 | Community |  |  |  |  |
| Debian 9 | x86_64 | Enterprise |  |  |  |  |
| Debian 9 | x86_64 | Community |  |  |  |  |
| RHEL/Rocky/Alma/Oracle Linux 9.0+ [1] | x86_64 | Enterprise |  | 6.0.4+ |  |  |
| RHEL/Rocky/Alma/Oracle Linux 9.0+ [1] | x86_64 | Community |  | 6.0.4+ |  |  |
| RHEL/Rocky/Alma/Oracle Linux 8.0+ [1] | x86_64 | Enterprise |  |  |  |  |
| RHEL/Rocky/Alma/Oracle Linux 8.0+ [1] | x86_64 | Community |  |  |  |  |
| RHEL/CentOS/Oracle Linux 7.0+ [1] | x86_64 | Enterprise |  |  |  |  |
| RHEL/CentOS/Oracle Linux 7.0+ [1] | x86_64 | Community |  |  |  |  |
| RHEL/CentOS/Oracle Linux 6.2+ [1] | x86_64 | Enterprise |  |  |  |  |
| RHEL/CentOS/Oracle Linux 6.2+ [1] | x86_64 | Community |  |  |  |  |
| SLES 15 | x86_64 | Enterprise |  |  |  |  |
| SLES 15 | x86_64 | Community |  |  |  |  |
| SLES 12 | x86_64 | Enterprise |  |  |  |  |
| SLES 12 | x86_64 | Community |  |  |  |  |
| Ubuntu 22.04 | x86_64 | Enterprise |  | 6.0.4+ |  |  |
| Ubuntu 22.04 | x86_64 | Community |  | 6.0.4+ |  |  |
| Ubuntu 20.04 | x86_64 | Enterprise |  |  |  |  |
| Ubuntu 20.04 | x86_64 | Community |  |  |  |  |
| Ubuntu 18.04 | x86_64 | Enterprise |  |  |  |  |
| Ubuntu 18.04 | x86_64 | Community |  |  |  |  |
| Ubuntu 16.04 | x86_64 | Enterprise |  |  |  |  |
| Ubuntu 16.04 | x86_64 | Community |  |  |  |  |
| Windows 11 | x86_64 | Enterprise |  |  |  |  |
| Windows 11 | x86_64 | Community |  |  |  |  |
| Windows Server 2022 | x86_64 | Enterprise |  |  |  |  |
| Windows Server 2022 | x86_64 | Community |  |  |  |  |
| Windows Server 2019 | x86_64 | Enterprise |  |  |  |  |
| Windows Server 2019 | x86_64 | Community |  |  |  |  |
| Windows 10 / Server 2016 | x86_64 | Enterprise |  |  |  |  |
| Windows 10 / Server 2016 | x86_64 | Community |  |  |  |  |
| macOS 13 | x86_64 | Enterprise |  |  |  |  |
| macOS 13 | x86_64 | Community |  |  |  |  |
| macOS 12 | x86_64 | Enterprise |  |  |  |  |
| macOS 12 | x86_64 | Community |  |  |  |  |
| macOS 11 | x86_64 | Enterprise |  |  |  |  |
| macOS 11 | x86_64 | Community |  |  |  |  |
| macOS 10.15 | x86_64 | Enterprise |  |  |  |  |
| macOS 10.15 | x86_64 | Community |  |  |  |  |
| macOS 10.14 | x86_64 | Enterprise |  |  |  |  |
| macOS 10.14 | x86_64 | Community |  |  |  |  |
| macOS 10.13 | x86_64 | Enterprise |  |  |  |  |
| macOS 10.13 | x86_64 | Community |  |  |  |  |
| macOS 13 | arm64 | Enterprise |  |  |  |  |
| macOS 13 | arm64 | Community |  |  |  |  |
| macOS 12 | arm64 | Enterprise |  |  |  |  |
| macOS 12 | arm64 | Community |  |  |  |  |
| macOS 11 | arm64 | Enterprise |  |  |  |  |
| macOS 11 | arm64 | Community |  |  |  |  |
| Amazon Linux 2023 | arm64 | Enterprise |  |  |  |  |
| Amazon Linux 2023 | arm64 | Community |  |  |  |  |
| Amazon Linux 2 | arm64 | Enterprise |  |  |  | 4.4.4+ |
| Amazon Linux 2 | arm64 | Community |  |  |  | 4.4.4+ |
| RHEL/CentOS/Rocky/Alma 9 | arm64 | Enterprise |  |  |  |  |
| RHEL/CentOS/Rocky/Alma 9 | arm64 | Community |  |  |  |  |
| RHEL/CentOS/Rocky/Alma 8 | arm64 | Enterprise |  |  |  | 4.4.4+ |
| RHEL/CentOS/Rocky/Alma 8 | arm64 | Community |  |  |  | 4.4.4+ |
| Ubuntu 22.04 | arm64 | Enterprise |  | 6.0.4+ |  |  |
| Ubuntu 22.04 | arm64 | Community |  | 6.0.4+ |  |  |
| Ubuntu 20.04 | arm64 | Enterprise |  |  |  |  |
| Ubuntu 20.04 | arm64 | Community |  |  |  |  |
| Ubuntu 18.04 | arm64 | Enterprise |  |  |  |  |
| Ubuntu 18.04 | arm64 | Community |  |  |  |  |
| Ubuntu 16.04 | arm64 | Enterprise |  |  |  |  |
| RHEL/Rocky/Alma 9 | ppc64le | Enterprise |  |  |  |  |
| RHEL/Rocky/Alma 8 | ppc64le | Enterprise |  |  |  |  |
| RHEL/CentOS 7 | ppc64le | Enterprise |  | 6.0.7+ |  |  |
| RHEL/Rocky/Alma 9 | s390x | Enterprise |  |  |  |  |
| RHEL/Rocky/Alma 8 | s390x | Enterprise |  |  | 5.0.9+ |  |
| RHEL/CentOS 7 | s390x | Enterprise |  |  |  |  |
| RHEL/CentOS 7 | s390x | Community |  |  |  |  |
[1] On Oracle Linux, MongoDB only supports the Red Hat Compatible Kernel.
[2] MongoDB versions 5.0 and greater are tested against SLES 12 service pack 5. Earlier versions of MongoDB are tested against SLES 12 with no service pack.
[3] MongoDB versions 7.0 and later are tested against SLES 15 service pack 4. Earlier versions of MongoDB are tested against SLES 15 with no service pack.
[4] MongoDB version 7.0 is built and tested against RHEL 7.9. Earlier versions of MongoDB are tested against RHEL 7 and assume forward compatibility.

While MongoDB supports a variety of platforms, the following operating systems are recommended for production use on x86_64 architecture:

  • Amazon Linux

  • Debian

  • RHEL [5]

  • SLES

  • Ubuntu LTS

  • Windows Server

For best results, run the latest version of your platform. If you run an older version, make sure that your version is supported by its provider.

[5] MongoDB on-premises products released for RHEL version 8.0+ are compatible with Rocky Linux version 8.0+ and AlmaLinux version 8.0+, contingent upon those distributions meeting their obligation to deliver full RHEL compatibility.

Be sure you have the latest stable release.

All MongoDB releases are available on the MongoDB Download Center page. The MongoDB Download Center is a good place to verify the current stable release, even if you are installing via a package manager.

For other MongoDB products, refer either to the MongoDB Download Center page or their respective documentation.

The files in the dbPath directory must correspond to the configured storage engine. mongod will not start if dbPath contains data files created by a storage engine other than the one specified by --storageEngine.

mongod must possess read and write permissions for the specified dbPath.
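
For example, on a Linux host where mongod runs as the mongodb user, you can assign ownership of the data directory as follows (the user name, group, and path are typical package defaults and may differ on your system):

sudo chown -R mongodb:mongodb /var/lib/mongodb
sudo chmod 0750 /var/lib/mongodb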

If you use an antivirus (AV) scanner or an endpoint detection and response (EDR) scanner, configure your scanner to exclude the database storage path and the database log path from the scan.

The data files in the database storage path are compressed. Additionally, if you use the encrypted storage engine, the data files are also encrypted. The I/O and CPU costs to scan these files may significantly decrease performance without providing any security benefits.

If you don't exclude the directories in your database storage path and database log path, the scanner could quarantine or delete important files. Missing or quarantined files can corrupt your database and crash your MongoDB instance.

WiredTiger supports concurrent access by readers and writers to the documents in a collection. Clients can read documents while write operations are in progress, and multiple threads can modify different documents in a collection at the same time.

Tip

See also:

Allocate Sufficient RAM and CPU provides information about how WiredTiger takes advantage of multiple CPU cores and how to improve operation throughput.

MongoDB uses write ahead logging to an on-disk journal. Journaling guarantees that MongoDB can quickly recover write operations that were written to the journal but not written to data files in cases where mongod terminated due to a crash or other serious failure. See Journaling for more information.

Starting in MongoDB 3.6, you can use causally consistent sessions to read your own writes, if the writes request acknowledgment.

Prior to MongoDB 3.6, to read your own writes you had to issue your write operation with { w: "majority" } write concern, and then issue your read operation with primary read preference and either "majority" or "linearizable" read concern.

Write Concern describes the level of acknowledgment requested from MongoDB for write operations. The level of write concern affects how quickly the write operation returns. When write operations have a weak write concern, they return quickly. With stronger write concerns, clients must wait after sending a write operation until MongoDB confirms the write operation at the requested write concern level. With an insufficient write concern, write operations may appear to a client to have succeeded, but may not persist in some cases of server failure.

See the Write Concern document for more information about choosing an appropriate write concern level for your deployment.

Always run MongoDB in a trusted environment, with network rules that prevent access from all unknown machines, systems, and networks. As with any sensitive system that is dependent on network access, your MongoDB deployment should only be accessible to specific systems that require access, such as application servers, monitoring services, and other MongoDB components.

Important

By default, authorization is not enabled, and mongod assumes a trusted environment. Enable authorization mode as needed. For more information on authentication mechanisms supported in MongoDB as well as authorization in MongoDB, see Authentication and Role-Based Access Control.
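
For example, a minimal sketch of enabling authorization when starting mongod from the command line (the configuration file path is illustrative; the same setting is available as security.authorization in the configuration file):

mongod --auth --config /etc/mongod.conf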

For additional information and considerations on security, refer to the documents in the Security section.

For Windows users, consider the Windows Server Technet Article on TCP Configuration when deploying MongoDB on Windows.

Changed in version 3.6: MongoDB 3.6 removes the deprecated HTTP interface and REST API to MongoDB.

Earlier versions of MongoDB provide an HTTP interface to check the status of the server and, optionally, run queries. The HTTP interface is disabled by default. Do not enable the HTTP interface in production environments.

Avoid overloading the connection resources of a mongod or mongos instance by adjusting the connection pool size to suit your use case. Start at 110-115% of the typical number of concurrent database requests, and modify the connection pool size as needed. Refer to Connection Pool Options for adjusting the connection pool size.

The connPoolStats command returns information regarding the number of open connections to the current database for mongos and mongod instances in sharded clusters.
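
As a sketch, most official drivers read the maximum pool size from the maxPoolSize connection string option, and you can inspect open connections from the shell (the host name below is illustrative):

# Illustrative connection string read by the application's driver:
#   mongodb://db0.example.net:27017/?maxPoolSize=115
mongosh --eval 'db.adminCommand({ connPoolStats: 1 })'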

See also Allocate Sufficient RAM and CPU.

MongoDB is designed specifically with commodity hardware in mind and has few hardware requirements or limitations. MongoDB's core components run on little-endian hardware, primarily x86/x86_64 processors. Client libraries (i.e. drivers) can run on big or little endian systems.

At a minimum, ensure that each mongod or mongos instance has access to two real cores or one multi-core physical CPU.

The WiredTiger storage engine is multithreaded and can take advantage of additional CPU cores. Specifically, the total number of active threads (i.e. concurrent operations) relative to the number of available CPUs can impact performance:

  • Throughput increases as the number of concurrent active operations increases up to the number of CPUs.

  • Throughput decreases as the number of concurrent active operations exceeds the number of CPUs by some threshold amount.

The threshold depends on your application. You can determine the optimum number of concurrent active operations for your application by experimenting and measuring throughput. The output from mongostat provides statistics on the number of active reads/writes in the (ar|aw) column.

With WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.

Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:

  • 50% of (RAM - 1 GB), or

  • 256 MB.

For example, on a system with a total of 4GB of RAM the WiredTiger cache will use 1.5GB of RAM (0.5 * (4 GB - 1 GB) = 1.5 GB). Conversely, a system with a total of 1.25 GB of RAM will allocate 256 MB to the WiredTiger cache because that is more than half of the total RAM minus one gigabyte (0.5 * (1.25 GB - 1 GB) = 128 MB < 256 MB).

Note

In some instances, such as when running in a container, the database can have memory constraints that are lower than the total system memory. In such instances, this memory limit, rather than the total system memory, is used as the maximum RAM available.

To see the memory limit, see hostInfo.system.memLimitMB.

By default, WiredTiger uses Snappy block compression for all collections and prefix compression for all indexes. Compression defaults are configurable at a global level and can also be set on a per-collection and per-index basis during collection and index creation.

Different representations are used for data in the WiredTiger internal cache versus the on-disk format:

  • Data in the filesystem cache is the same as the on-disk format, including benefits of any compression for data files. The filesystem cache is used by the operating system to reduce disk I/O.

  • Indexes loaded in the WiredTiger internal cache have a different data representation from the on-disk format, but can still take advantage of index prefix compression to reduce RAM usage. Index prefix compression deduplicates common prefixes from indexed fields.

  • Collection data in the WiredTiger internal cache is uncompressed and uses a different representation from the on-disk format. Block compression can provide significant on-disk storage savings, but data must be uncompressed to be manipulated by the server.

Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.

To adjust the size of the WiredTiger internal cache, see storage.wiredTiger.engineConfig.cacheSizeGB and --wiredTigerCacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value.

Note

The storage.wiredTiger.engineConfig.cacheSizeGB limits the size of the WiredTiger internal cache. The operating system will use the available free memory for filesystem cache, which allows the compressed MongoDB data files to stay in memory. In addition, the operating system will use any free RAM to buffer file system blocks and file system cache.

To accommodate the additional consumers of RAM, you may have to decrease WiredTiger internal cache size.

The default WiredTiger internal cache size value assumes that there is a single mongod instance per machine. If a single machine contains multiple MongoDB instances, then you should decrease the setting to accommodate the other mongod instances.

If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set storage.wiredTiger.engineConfig.cacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container. See memLimitMB.
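
For example, a minimal sketch of running mongod in a Docker container with a 2 GB memory limit and a correspondingly reduced WiredTiger cache (the image tag and cache size are illustrative):

docker run -d --name mongod --memory 2g mongo:7.0 --wiredTigerCacheSizeGB 0.5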

To view statistics on the cache and eviction rate, see the wiredTiger.cache field returned from the serverStatus command.
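
For example, you can inspect the cache section of serverStatus from the shell:

mongosh --quiet --eval 'db.serverStatus().wiredTiger.cache'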

When using encryption, CPUs equipped with AES-NI instruction-set extensions show significant performance advantages. If you are using MongoDB Enterprise with the Encrypted Storage Engine, choose a CPU that supports AES-NI for better performance.

MongoDB has good results and a good price-performance ratio with SATA SSD (Solid State Disk).

Use SSD if available and economical.

Commodity (SATA) spinning drives are often a good option, as the random I/O performance increase with more expensive spinning drives is not that dramatic (only on the order of 2x). Using SSDs or increasing RAM may be more effective in increasing I/O throughput.

Running MongoDB on a system with Non-Uniform Memory Access (NUMA) can cause a number of operational problems, including slow performance for periods of time and high system process usage.

When running MongoDB servers and clients on NUMA hardware, you should configure a memory interleave policy so that the host behaves in a non-NUMA fashion. MongoDB checks NUMA settings at startup when deployed on Linux (since version 2.0) and Windows (since version 2.6) machines. If the NUMA configuration may degrade performance, MongoDB prints a warning.

On Windows, memory interleaving must be enabled through the machine's BIOS. Consult your system documentation for details.

On Linux, you must disable zone reclaim and also ensure that your mongod and mongos instances are started by numactl, which is generally configured through your platform's init system. You must perform both of these operations to properly disable NUMA for use with MongoDB.

  1. Disable zone reclaim with one of the following commands:

    echo 0 | sudo tee /proc/sys/vm/zone_reclaim_mode
    sudo sysctl -w vm.zone_reclaim_mode=0
  2. Ensure that mongod and mongos are started by numactl. This is generally configured through your platform's init system. Run the following command to determine which init system is in use on your platform:

    ps --no-headers -o comm 1
    • If "systemd", your platform uses the systemd init system, and you must follow the steps in the systemd tab below to edit your MongoDB service file(s).

    • If "init", your platform uses the SysV Init system, and you do not need to perform this step. The default MongoDB init script for SysV Init includes the necessary steps to start MongoDB instances via numactl by default.

    • If you manage your own init scripts (i.e. you are not using either of these init systems), you must follow the steps in the Custom init scripts tab below to edit your custom init script(s).
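
For systemd-managed deployments, the edit amounts to starting mongod through numactl in the unit file. A minimal sketch, assuming a default mongod package install (the unit file path and mongod options may differ on your system):

# In the [Service] section of /lib/systemd/system/mongod.service, prefix ExecStart with numactl:
#   ExecStart=/usr/bin/numactl --interleave=all /usr/bin/mongod --config /etc/mongod.conf
sudo systemctl daemon-reload
sudo systemctl restart mongod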


For more information, see the Documentation for /proc/sys/vm/*.

MongoDB performs best where swapping can be avoided or kept to a minimum, as retrieving data from swap will always be slower than accessing data in RAM. However, if the system hosting MongoDB runs out of RAM, swapping can prevent the Linux OOM Killer from terminating the mongod process.

Generally, you should choose one of the following swap strategies:

  1. Assign swap space on your system, and configure the kernel to only permit swapping under high memory load, or

  2. Do not assign swap space on your system, and configure the kernel to disable swapping entirely

See Set vm.swappiness for instructions on configuring swap on your Linux system following these guidelines.

Note

If your MongoDB instance is hosted on a system that also runs other software, such as a webserver, you should choose the first swap strategy. Do not disable swap in this case. If possible, it is highly recommended that you run MongoDB on its own dedicated system.

For optimal performance in terms of the storage layer, use disks backed by RAID-10. RAID-5 and RAID-6 do not typically provide sufficient performance to support a MongoDB deployment.

With the WiredTiger storage engine, WiredTiger objects may be stored on remote file systems if the remote file system conforms to ISO/IEC 9945-1:1996 (POSIX.1). Because remote file systems are often slower than local file systems, using a remote file system for storage may degrade performance.

If you decide to use NFS, add the following NFS options to your /etc/fstab file:

  • bg

  • hard

  • nolock

  • noatime

  • nointr

Depending on your kernel version, some of these values may already be set as the default. Consult your platform's documentation for more information.
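
A sketch of a corresponding /etc/fstab entry (the NFS server, export path, and mount point are illustrative):

nfs.example.net:/exports/mongodb  /var/lib/mongodb  nfs  bg,hard,nolock,noatime,nointr  0 0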

For improved performance, consider separating your database's data, journal, and logs onto different storage devices, based on your application's access and write pattern. Mount the components as separate filesystems and use symbolic links to map each component's path to the device storing it.

For the WiredTiger storage engine, you can also store the indexes on a different storage device. See storage.wiredTiger.engineConfig.directoryForIndexes.
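
For example, a sketch of relocating the journal to its own device (the device name, filesystem, and paths are illustrative; stop mongod before moving any files):

sudo mkfs.xfs /dev/sdb1
sudo mkdir /mnt/mongo-journal
sudo mount /dev/sdb1 /mnt/mongo-journal
sudo mv /var/lib/mongodb/journal/* /mnt/mongo-journal/
sudo rmdir /var/lib/mongodb/journal
sudo ln -s /mnt/mongo-journal /var/lib/mongodb/journal
sudo chown -R mongodb:mongodb /mnt/mongo-journal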

Note

Using different storage devices will affect your ability to create snapshot-style backups of your data, since the files will be on different devices and volumes.

For local block devices attached to a virtual machine instance via the hypervisor or hosted by a cloud hosting provider, the guest operating system should use the cfq scheduler for best performance. The cfq scheduler allows the operating system to defer I/O scheduling to the underlying hypervisor.

Note

The noop scheduler can be used for scheduling if all the following conditions are met:

  • The hypervisor is VMware.

  • A replica set topology or sharded cluster is used.

  • The virtual machines are located on the same virtual host.

  • The underlying storage containing the dbPath directories is a common LUN blockstore.

For physical servers, the operating system should use a deadline scheduler. The deadline scheduler caps maximum latency per request and maintains a good disk throughput that is best for disk-intensive database applications.
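
To check and change the scheduler for a block device (the device name is illustrative; on recent kernels the available scheduler names may instead be mq-deadline, none, and kyber):

cat /sys/block/sda/queue/scheduler
echo deadline | sudo tee /sys/block/sda/queue/scheduler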

See the Replica Set Architectures document for an overview of architectural considerations for replica set deployments.

See Sharded Cluster Production Architecture for an overview of recommended sharded cluster architectures for production deployments.

WiredTiger can compress collection data using one of the following compression libraries:

  • snappy
    Provides a lower compression rate than zlib or zstd but has a lower CPU cost than either.
  • zlib
    Provides better compression rate than snappy but has a higher CPU cost than both snappy and zstd.
  • zstd
    Provides better compression rate than both snappy and zlib and has a lower CPU cost than zlib.

By default, WiredTiger uses the snappy compression library. To change the compression setting, see storage.wiredTiger.collectionConfig.blockCompressor.

WiredTiger uses prefix compression on all indexes by default.
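
For example, to start a mongod whose newly created collections use zstd block compression (a sketch; existing collections keep the compressor they were created with, and the dbPath is illustrative):

mongod --wiredTigerCollectionBlockCompressor zstd --dbpath /var/lib/mongodb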

MongoDB components keep logical clocks for supporting time-dependent operations. Using NTP to synchronize host machine clocks mitigates the risk of clock drift between components. Clock drift between components increases the likelihood of incorrect or abnormal behavior of time-dependent operations like the following:

  • If the underlying system clock of any given MongoDB component drifts a year or more from other components in the same deployment, communication between those members may become unreliable or halt altogether.

    The maxAcceptableLogicalClockDriftSecs parameter controls the amount of acceptable clock drift between components. Clusters with a lower value of maxAcceptableLogicalClockDriftSecs have a correspondingly lower tolerance for clock drift.

  • Two cluster members with different system clocks may return different values for operations that return the current cluster or system time, such as Date(), NOW, and CLUSTER_TIME.

  • Features which rely on timekeeping may have inconsistent or unpredictable behavior in clusters with clock drift between MongoDB components.

When running MongoDB in production on Linux, you should use Linux kernel version 2.6.36 or later, with either the XFS or EXT4 filesystem. If possible, use XFS as it generally performs better with MongoDB.

With the WiredTiger storage engine, using XFS is strongly recommended for data bearing nodes to avoid performance issues that may occur when using EXT4 with WiredTiger.

  • In general, if you use the XFS file system, use at least version 2.6.25 of the Linux Kernel.

  • If you use the EXT4 file system, use at least version 2.6.28 of the Linux Kernel.

  • On Red Hat Enterprise Linux and CentOS, use at least version 2.6.18-194 of the Linux kernel.

MongoDB uses the GNU C Library (glibc) on Linux. Generally, each Linux distro provides its own vetted version of this library. For best results, use the latest update available for this system-provided version. You can check whether you have the latest version installed by using your system's package manager. For example:

  • On RHEL / CentOS, the following command updates the system-provided GNU C Library:

    sudo yum update glibc
  • On Ubuntu / Debian, the following command updates the system-provided GNU C Library:

    sudo apt-get install libc6

Important

MongoDB requires a filesystem that supports fsync() on directories. For example, HGFS and VirtualBox's shared folders do not support this operation.

“Swappiness” is a Linux kernel setting that influences the behavior of the Virtual Memory manager. The vm.swappiness setting ranges from 0 to 100: the higher the value, the more strongly it prefers swapping memory pages to disk over dropping pages from RAM.

  • A setting of 0 disables swapping entirely [6].

  • A setting of 1 permits the kernel to swap only to avoid out-of-memory problems.

  • A setting of 60 tells the kernel to swap to disk often, and is the default value on many Linux distributions.

  • A setting of 100 tells the kernel to swap aggressively to disk.

MongoDB performs best where swapping can be avoided or kept to a minimum. As such you should set vm.swappiness to either 1 or 0 depending on your application needs and cluster configuration.

Note

Most system and user processes run within a cgroup, which, by default, sets the vm.swappiness to 60. If you are running RHEL / CentOS, set vm.force_cgroup_v2_swappiness to 1 to ensure that the specified vm.swappiness value overrides any cgroup defaults.

[6] With Linux kernel versions previous to 3.5, or RHEL / CentOS kernel versions previous to 2.6.32-303, a vm.swappiness setting of 0 would still allow the kernel to swap in certain emergency situations.

Note

If your MongoDB instance is hosted on a system that also runs other software, such as a webserver, you should set vm.swappiness to 1. If possible, it is highly recommended that you run MongoDB on its own dedicated system.

  • To check the current swappiness setting on your system, run:

    cat /proc/sys/vm/swappiness
  • To change swappiness on your system:

    1. Edit the /etc/sysctl.conf file and add the following line:

      vm.swappiness = 1
    2. Run the following command to apply the setting:

      sudo sysctl -p

Note

If you are running RHEL / CentOS and using a tuned performance profile, you must also edit your chosen profile to set vm.swappiness to 1 or 0.

For all MongoDB deployments:

  • Use the Network Time Protocol (NTP) to synchronize time among your hosts. This is especially important in sharded clusters.
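
On systemd-based Linux hosts you can confirm that time synchronization is active with timedatectl; chrony and ntpd installations also report status through their own tools:

timedatectl status
chronyc tracking    # if chrony is the NTP client in use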

For the WiredTiger storage engines, consider the following recommendations:

  • Turn off atime for the storage volume containing the database files.

  • Adjust the ulimit settings for your platform according to the recommendations in the ulimit reference. Low ulimit values will negatively affect MongoDB when under heavy use and can lead to failed connections to MongoDB processes and loss of service. A quick check of this limit is shown after this list.

    Note

    Starting in MongoDB 4.4, a startup error is generated if the ulimit value for number of open files is under 64000.

  • Disable Transparent Huge Pages. MongoDB performs better with normal (4096 bytes) virtual memory pages. See Transparent Huge Pages Settings.

  • Disable NUMA in your BIOS. If that is not possible, see MongoDB on NUMA Hardware.

  • Configure SELinux for MongoDB if you are not using the default MongoDB directory paths or ports.

    Note

    If you are using SELinux, any MongoDB operation that requires server-side JavaScript will result in segfault errors. Disable Server-Side Execution of JavaScript describes how to disable execution of server-side JavaScript.
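
A quick check of the open-files limit and the current Transparent Huge Pages mode on the host running mongod:

ulimit -n
cat /sys/kernel/mm/transparent_hugepage/enabled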

For the WiredTiger storage engine:

  • Set the readahead setting between 8 and 32 regardless of storage media type (spinning disk, SSD, etc.).

    Higher readahead commonly benefits sequential I/O operations. Since MongoDB disk access patterns are generally random, using higher readahead settings provides limited benefit or potential performance degradation. As such, for optimal MongoDB performance, set readahead between 8 and 32, unless testing shows a measurable, repeatable, and reliable benefit in a higher readahead value. MongoDB commercial support can provide advice and guidance on alternate readahead configurations.
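
You can check and set readahead (measured in 512-byte sectors) with blockdev; the device name is illustrative:

sudo blockdev --getra /dev/sda
sudo blockdev --setra 32 /dev/sda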

On Linux platforms, you may observe one of the following statements in the MongoDB log:

<path to TLS/SSL libs>/libssl.so.<version>: no version information available (required by /usr/bin/mongod)
<path to TLS/SSL libs>/libcrypto.so.<version>: no version information available (required by /usr/bin/mongod)

These warnings indicate that the system's TLS/SSL libraries are different from the TLS/SSL libraries that the mongod was compiled against. Typically these messages do not require intervention; however, you can use the following operations to determine the symbol versions that mongod expects:

objdump -T <path to mongod>/mongod | grep " SSL_"
objdump -T <path to mongod>/mongod | grep " CRYPTO_"

These operations will return output that resembles one of the following lines:

0000000000000000 DF *UND* 0000000000000000 libssl.so.10 SSL_write
0000000000000000 DF *UND* 0000000000000000 OPENSSL_1.0.0 SSL_write

The last two strings in this output are the symbol version and symbol name. Compare these values with the values returned by the following operations to detect symbol version mismatches:

objdump -T <path to TLS/SSL libs>/libssl.so.1*
objdump -T <path to TLS/SSL libs>/libcrypto.so.1*

This procedure is neither exact nor exhaustive: many symbols used by mongod from the libcrypto library do not begin with CRYPTO_.

For MongoDB instances using the WiredTiger storage engine, performance on Windows is comparable to performance on Linux.

This section describes considerations when running MongoDB in some of the more common virtual environments.

For all platforms, consider Scheduling.

There are two performance configurations to consider:

  • Reproducible performance for performance testing or benchmarking, and

  • Raw maximum performance

To tune performance on EC2 for either configuration, you should:

  • Enable AWS Enhanced Networking for your instance. Not all instance types support Enhanced Networking.

    To learn more about Enhanced Networking, see the AWS documentation.

  • Set tcp_keepalive_time to 120.

If you are concerned more about reproducible performance on EC2, you should also:

  • Use provisioned IOPS for the storage, with separate devices for journal and data. Do not use the ephemeral (SSD) storage available on most instance types as their performance changes moment to moment. (The i series is a notable exception, but very expensive.)

  • Disable DVFS and CPU power saving modes.

  • Disable hyperthreading.

  • Use numactl to bind memory locality to a single socket.

Use Premium Storage. Microsoft Azure offers two general types of storage: Standard storage and Premium storage. MongoDB on Azure has better performance when using Premium storage than it does with Standard storage.

The TCP idle timeout on the Azure load balancer is 240 seconds by default, which can cause it to silently drop connections if the TCP keepalive on your Azure systems is greater than this value. You should set tcp_keepalive_time to 120 to ameliorate this problem.

Note

You will need to restart mongod and mongos processes for new system-wide keepalive settings to take effect.

  • To view the keepalive setting on Linux, use one of the following commands:

    sysctl net.ipv4.tcp_keepalive_time

    Or:

    cat /proc/sys/net/ipv4/tcp_keepalive_time

    The value is measured in seconds.

    Note

    Although the setting name includes ipv4, the tcp_keepalive_time value applies to both IPv4 and IPv6.

  • To change the tcp_keepalive_time value, you can use one of the following commands, supplying a <value> in seconds:

    sudo sysctl -w net.ipv4.tcp_keepalive_time=<value>

    Or:

    echo <value> | sudo tee /proc/sys/net/ipv4/tcp_keepalive_time

    These operations do not persist across system reboots. To persist the setting, add the following line to /etc/sysctl.conf, supplying a <value> in seconds, and reboot the machine:

    net.ipv4.tcp_keepalive_time = <value>

    Keepalive values greater than 300 seconds (5 minutes) will be overridden on mongod and mongos sockets and set to 300 seconds.

  • To view the keepalive setting on Windows, issue the following command:

    reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v KeepAliveTime

    The registry value is not present by default. The system default, used if the value is absent, is 7200000 milliseconds or 0x6ddd00 in hexadecimal.


  • To change the KeepAliveTime value, use the following command in an Administrator Command Prompt, where <value> is expressed in hexadecimal (e.g. 120000 is 0x1d4c0):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\ /t REG_DWORD /v KeepAliveTime /d <value>

    Windows users should consider the Windows Server Technet Article on KeepAliveTime for more information on setting keepalive for MongoDB deployments on Windows systems. Keepalive values greater than or equal to 600000 milliseconds (10 minutes) will be ignored by mongod and mongos.

MongoDB is compatible with VMware.

VMware supports memory overcommitment, where you can assign more memory to your virtual machines than the physical machine has available. When memory is overcommitted, the hypervisor reallocates memory between the virtual machines. VMware's balloon driver (vmmemctl) reclaims the pages that are considered least valuable.

The balloon driver resides inside the guest operating system. When the balloon driver expands, it may induce the guest operating system to reclaim memory from guest applications, which can interfere with MongoDB's memory management and affect MongoDB's performance.

Do not disable the balloon driver and memory overcommitment features; doing so can cause the hypervisor to use its swap, which will affect performance. Instead, map and reserve the full amount of memory for the virtual machine running MongoDB. This ensures that the balloon will not be inflated in the local operating system if there is memory pressure in the hypervisor due to an overcommitted configuration.

Ensure that virtual machines stay on a specific ESX/ESXi host by setting VMware's affinity rules. If you must manually migrate a virtual machine to another host and the mongod instance on the virtual machine is the primary, you must first step down the primary and then shut down the instance.
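
A minimal sketch of stepping down and then shutting down a primary from the shell before such a migration (the 60-second step-down period is illustrative):

mongosh --eval 'db.adminCommand({ replSetStepDown: 60 })'
mongosh admin --eval 'db.shutdownServer()'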

Follow the networking best practices for vMotion and the VMKernel. Failure to follow the best practices can result in performance problems and affect replica set and sharded cluster high availability mechanisms.

You can clone a virtual machine running MongoDB. You might use this function to deploy a new virtual host to add as a member of a replica set.

MongoDB is compatible with KVM.

KVM supports memory overcommitment, where you can assign more memory to your virtual machines than the physical machine has available. When memory is overcommitted, the hypervisor reallocates memory between the virtual machines. KVM's balloon driver reclaims the pages that are considered least valuable.

The balloon driver resides inside the guest operating system. When the balloon driver expands, it may induce the guest operating system to reclaim memory from guest applications, which can interfere with MongoDB's memory management and affect MongoDB's performance.

Do not disable the balloon driver and memory overcommitment features; doing so can cause the hypervisor to use its swap, which will affect performance. Instead, map and reserve the full amount of memory for the virtual machine running MongoDB. This ensures that the balloon will not be inflated in the local operating system if there is memory pressure in the hypervisor due to an overcommitted configuration.

On Linux, use the iostat command to check if disk I/O is a bottleneck for your database. Specify a number of seconds when running iostat to avoid displaying stats covering the time since server boot.

For example, the following command will display extended statistics and the time for each displayed report, with traffic in MB/s, at one second intervals:

iostat -xmt 1

Key fields from iostat:

  • %util: the most useful field for a quick check; it indicates what percentage of the time the device/drive is in use.

  • avgrq-sz: average request size. Smaller numbers for this value reflect more random I/O operations.

bwm-ng is a command-line tool for monitoring network use. If you suspect a network-based bottleneck, you may use bwm-ng to begin your diagnostic process.

To make backups of your MongoDB database, please refer to MongoDB Backup Methods Overview.
