

Overview

MongoDB Enterprise is a distributed, document-oriented NoSQL database. With PDS, users can leverage core capabilities of Kubernetes to deploy and manage their MongoDB clusters using a cloud-native, container-based model.

PDS follows the MongoDB release lifecycle: new releases become available in PDS shortly after they are GA, and older versions are removed once they reach end-of-life. The list of currently supported MongoDB versions can be found here. Currently, PDS supports only the Enterprise edition of MongoDB.

Like all data services in PDS, MongoDB deployments run within Kubernetes. PDS includes a component called the PDS Deployments Operator, which manages the deployment of all PDS data services, including MongoDB. The operator extends Kubernetes by implementing a custom resource called mongodb. The mongodb resource type represents a MongoDB cluster, allowing standard Kubernetes tools to be used to manage MongoDB clusters, including scaling, monitoring, and upgrading.
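
For example, assuming the operator and its custom resource definition are installed in the target cluster, deployments can be listed and inspected with kubectl. A minimal sketch; the deployment name and namespace below are hypothetical:

    # List all MongoDB custom resources managed by the PDS Deployments Operator
    kubectl get mongodb --all-namespaces

    # Inspect a single deployment in detail (name and namespace are illustrative)
    kubectl describe mongodb mongodb-prod -n pds-demo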

You can learn more about the PDS architecture here.

Clustering

MongoDB is a distributed system meant to be run as a cluster of individual nodes. PDS deploys MongoDB as a sharded cluster, which consists of three distinct services: a config server, a shard server, and mongos proxies. Each of these components can be highly available, depending on the number of nodes the MongoDB cluster is configured with. The config server and shard server are both deployed as replica sets. Currently, these components cannot be independently scaled, so a three-node MongoDB cluster consists of a three-node config server replica set, a three-node shard replica set, and three mongos proxies.
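
Because this is a standard MongoDB sharded cluster, its topology can be confirmed with MongoDB's own shell. A minimal sketch, assuming network access to a mongos proxy; the hostname and credentials are placeholders:

    # Connect to a mongos proxy and print the sharded cluster topology,
    # including the config server replica set and the shard's members
    mongosh "mongodb://pds-user:<password>@mongos-host:27017/admin" \
      --eval "sh.status()"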

Although PDS allows MongoDB to be deployed with any number of nodes, high availability can only be achieved when running with three or more nodes. Smaller clusters should only be considered in development environments.

When deployed as a multi-node cluster, the sharded cluster components, which run as multi-container pods within a StatefulSet, automatically discover each other to form the cluster. Node and service discovery within a MongoDB cluster is also automatic when pods are deleted and recreated, or when additional nodes are added to the cluster via horizontal scaling.
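
The resulting Kubernetes objects can be observed directly. For a three-node deployment you would expect a StatefulSet with three pod replicas; the namespace below is hypothetical:

    # Show the StatefulSet backing the MongoDB deployment
    kubectl get statefulsets -n pds-demo

    # Each pod runs the sharded cluster components as multiple containers
    kubectl get pods -n pds-demo -o wide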

PDS leverages the high availability provided natively by MongoDB replica sets by spreading MongoDB nodes across Kubernetes worker nodes when possible. PDS uses the Storage Orchestrator for Kubernetes (Stork), in combination with Kubernetes StorageClasses, to intelligently schedule pods. By provisioning storage from different worker nodes, and then scheduling pods to be hyperconverged with their volumes, PDS deploys MongoDB clusters in a way that maximizes fault tolerance, even when entire worker nodes or availability zones are impacted.
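
As a hedged illustration, a Portworx-backed StorageClass of the general kind PDS relies on might look like the following. The name and parameter values are illustrative, not the exact template PDS generates; Stork-aware placement itself is configured on the pod spec (schedulerName: stork) rather than in the StorageClass:

    # Hypothetical StorageClass: Portworx-provisioned volumes with 2 replicas
    kubectl apply -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: pds-mongodb-example
    provisioner: pxd.portworx.com
    parameters:
      repl: "2"    # number of storage-level volume replicas
    EOF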

Refer to the PDS architecture to learn more about Stork and scheduling.

Replication

Application Replication

MongoDB natively replicates data across members of a replica set. That is, each member of a replica set maintains a copy of the same data set.

The number of primary and secondary nodes configured for the config server and shard server components (which are both deployed as replica sets) is determined by the number of nodes the deployment is configured with.

Application replication is used in MongoDB to accomplish load balancing and high availability. With data replicated to multiple nodes, more nodes are able to respond to requests for data. Likewise, in the event of node failure, the additional redundancy provided by multi-node replica sets allows the cluster to continue serving client requests.
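
Replica set health, including which member is currently primary, can be checked from the MongoDB shell. A sketch with placeholder connection details:

    # Print each replica set member and its state (PRIMARY/SECONDARY)
    mongosh "mongodb://pds-user:<password>@shard-host:27017/admin" \
      --eval "rs.status().members.forEach(m => print(m.name, m.stateStr))"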

Storage Replication

PDS takes data replication further. Storage in PDS is provided by Portworx Enterprise, which itself allows data to be replicated at the storage level.

Each MongoDB node in PDS stores its data on a PersistentVolume provisioned by Portworx Enterprise. These Portworx volumes can, in turn, be configured to replicate data to multiple volume replicas. In PDS, two volume replicas are recommended, in combination with MongoDB's application-level replication.
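
The replication factor of a deployment's volume can be verified with the Portworx pxctl CLI on a storage node; the volume name below is hypothetical:

    # Inspect a Portworx volume; the output includes its "HA" (replica) count
    # and the nodes that hold each replica
    pxctl volume inspect pvc-1a2b3c4d-example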

While this additional level of replication results in write amplification, storage-level replication solves a different problem than application-level replication: it reduces downtime in failure scenarios, that is, it reduces the Recovery Time Objective (RTO).

Portworx volume replicas can ensure that data is replicated to different Kubernetes worker nodes or even different availability zones. This maximizes the Kubernetes scheduler's ability to place MongoDB pods.

For example, when a Kubernetes worker node becomes unschedulable, pods can be scheduled on other worker nodes where the data already exists. Moreover, those pods can start and serve traffic immediately, without waiting for MongoDB to replicate data to them.

Reference Architecture

For illustrative purposes, consider the following availability zone-aware MongoDB and Portworx storage topology:

[Figure 1: MongoDB reference architecture]

A host and/or storage failure results in the immediate rescheduling of the MongoDB pod to a different worker node within the same availability zone. During rescheduling, Stork intelligently selects a worker node that already holds a replica of the pod's underlying Portworx volume, maintaining full replication while preserving hyperconvergence for optimal performance:

[Figure 2: MongoDB reference architecture]

Additionally, in the event that an entire availability zone is unavailable, MongoDB’s native replication capability ensures that client applications are not impacted by promoting secondary replica set members through an automatic failover.

[Figure 3: MongoDB reference architecture]

Configuration

Application-level MongoDB configurations can be tuned by specifying setting-specific environment variables within the deployment’s application configuration template.

You can learn more about application configuration templates here. The list of all MongoDB configurations that may be overridden is itemized in the MongoDB service’s reference documentation here.
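
As a purely illustrative sketch, an override in an application configuration template takes the form of a named environment variable with a value; the variable name below is hypothetical, and the reference documentation linked above lists the settings that are actually supported:

    # Hypothetical template entry (name and value are illustrative only)
    name: MONGODB_EXAMPLE_SETTING
    value: "true"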

Scaling

Because of the ease with which databases can be deployed and managed in PDS, it is common for customers to deploy many of them. Likewise, because it is easy to scale databases in PDS, it is common for users to start with smaller clusters and then add resources when needed. PDS supports both vertical scaling (i.e., CPU/memory), as well as horizontal scaling (i.e., nodes) of MongoDB clusters.

Vertical Scaling

Vertical scaling refers to the process of adding hardware resources to (scaling up) or removing hardware resources from (scaling down) database nodes. In the context of Kubernetes and PDS, these hardware resources are virtualized CPU and memory.

PDS allows the shard server component to be dynamically reprovisioned with additional or reduced CPU and/or memory. These changes are applied in a rolling fashion across each of the nodes in the MongoDB cluster. For multi-node MongoDB clusters, this operation is non-disruptive to client applications.
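
The rolling nature of such a change can be observed with standard Kubernetes tooling; the object name and namespace below are hypothetical:

    # Watch the rolling restart proceed one pod at a time
    kubectl rollout status statefulset/mongodb-prod -n pds-demo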

Horizontal Scaling

Horizontal scaling refers to the process of adding database nodes to (scaling out) or removing database nodes from (scaling in) a cluster. Currently, PDS supports only scaling out. This is accomplished by increasing the replica count of the StatefulSet, which adds pods to the existing MongoDB cluster. With MongoDB, increasing the number of nodes equates to scaling out the individual shard and config server replica sets, as well as the mongos proxies that accompany them; currently, additional shards cannot be added.
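
Scaling out is visible as a growing replica count on the StatefulSet, with new pods joining the cluster as they start; the namespace below is hypothetical, and the scaling itself is driven through PDS rather than by editing the StatefulSet directly:

    # Watch new pods appear as the replica count increases, e.g. from 3 to 5
    kubectl get pods -n pds-demo -w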

Connectivity

PDS manages pod-specific services whose type is determined by user input (currently, only the LoadBalancer and ClusterIP types are supported). This ensures a stable IP address beyond the lifecycle of any individual pod. For each of these services, an accompanying DNS record is managed automatically by PDS.

Connection information provided through the PDS UI reflects this by giving users an array of server endpoints, allowing them to connect to their sharded cluster via the mongos proxies. When possible, client configurations should include multiple endpoints to mitigate connectivity issues if any individual proxy becomes unavailable.
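
For example, a connection string that lists several mongos endpoints lets the driver fail over between proxies; the hostnames and credentials below are placeholders:

    # Connect through any available mongos proxy; the client retries across
    # the listed endpoints if one is unreachable
    mongosh "mongodb://pds-user:<password>@mongos-0.example.com:27017,mongos-1.example.com:27017,mongos-2.example.com:27017/admin"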

Backups

MongoDB backups in PDS use the mongodump utility, which backs up all shard data and stores it on a dedicated Portworx volume. The backup volume is shared across all nodes in the MongoDB cluster, allowing backup state to be shared between them. Because a dedicated volume is used, the backup process does not interfere with the performance of the database. Once mongodump has completed, the PDS Backup Operator copies the volume to a remote object store. This capability, provided by Portworx Enterprise's cloud snapshot functionality, allows PDS to fulfill the 3-2-1 backup strategy: three copies of the data, on two types of storage media, with one copy offsite.
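
PDS drives this process internally, but the underlying utility is the standard mongodump; a hand-run equivalent might look like the following, with all parameters illustrative:

    # Dump all databases through a mongos proxy to a dedicated backup mount
    mongodump --uri="mongodb://pds-user:<password>@mongos-host:27017" \
      --out=/backup/dump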

Restores

Restoring MongoDB in PDS is done out-of-place. That is to say, PDS will deploy a new MongoDB cluster and restore the data to the new cluster. This prevents users from accidentally overwriting data and allows users to stand up multiple copies of their databases for debugging or forensic purposes.

note

PDS does not currently support restoring individual nodes or individual databases.

The new MongoDB deployment will have the same properties the original had at the time of backup. If a backup was taken of a three-node MongoDB cluster that was later scaled to five nodes, restoring from that backup deploys a three-node cluster. Effectively, PDS restores the state of the deployment, not just the data. As with any database restore, data written after the backup was taken will be lost. Because user passwords are stored within the database, restoring from backup also reverts any password changes made since the backup. Therefore, to ensure access to the restored database, users must manage their own credentials.

Monitoring

PDS collects metrics through an exporter component deployed alongside the MongoDB nodes and dispatches them to the PDS control plane. While the PDS UI includes dashboards that report on a subset of these metrics as high-level health monitors, all collected metrics are queryable via a Prometheus endpoint. You can learn more about integrating the PDS control plane's Prometheus endpoint into your existing monitoring environment here. The list of all MongoDB metrics collected and made available by PDS is itemized in the MongoDB service's reference documentation here.
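
A minimal Prometheus scrape configuration against the control plane's endpoint might look like the following; the target URL and token are placeholders, and the exact endpoint and path are described in the integration documentation linked above:

    # prometheus.yml fragment (target and credentials are hypothetical)
    scrape_configs:
      - job_name: "pds-mongodb"
        scheme: https
        authorization:
          credentials: "<PDS_API_TOKEN>"
        static_configs:
          - targets: ["prometheus.pds.example.com"]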
