Managing Data Nodes#

The different startup types of data nodes and how the startup process works are described in detail in RonDB Startup.

The multithreaded data node uses the ndbmtd binary and is generally started as follows:

ndbmtd --ndb-connectstring=<string> --ndb-nodeid=<node-id>
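For example, assuming a management server reachable at mgmd-1:1186 and a data node with node ID 1 (both placeholders), this becomes:

ndbmtd --ndb-connectstring=mgmd-1:1186 --ndb-nodeid=1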

At a cluster start, all active data nodes are started in parallel; the start only succeeds once at least a partial cluster has formed.

Partial vs. Partitioned Cluster Starts#

Both a partial and a partitioned cluster contain at least one living node per node group. A partial cluster, however, also contains either a majority of all data nodes or at least one node group with all of its nodes.

One can therefore have multiple partitioned clusters, but not multiple partial clusters. A partitioned cluster is a cluster split up by a network partition.

One cluster can have a majority of the nodes, whilst another contains one node group with all of its nodes. However, in this case, the cluster with the majority of nodes will be missing a node group entirely and can therefore not be a partial cluster.

Whilst it makes sense to start a partial cluster if a data node is not able to start up, RonDB tries to avoid partitioned clusters at all costs.

For both cluster starts and cluster restarts (note: not rolling restarts), we can provide information that affects how partial and partitioned clusters are handled at startup (see the example after this list). The relevant settings are:

  • StartPartialTimeout: This specifies how many seconds we will wait before starting up a partial cluster. This defaults to 30 seconds.

  • StartPartitionedTimeout: This parameter specifies how many seconds to wait before we decide to start a partitioned cluster. It defaults to 0, meaning we will wait forever rather than start partitioned. The most common case is when the arbitrator has killed a partition and the data nodes of this partition want to start up again. With this parameter at 0, these data nodes will only be able to start up again once the partition has healed. The parameter is mainly used for testing purposes and it is highly recommended not to change it. The only way of re-integrating nodes that have started as a separate partition is to perform an initial node restart.

  • --nowait-nodes=<list>: This is a startup parameter for the data nodes. It specifies a set of nodes that do not have to be waited for. This is a manual intervention for cases where we know that these nodes are not up and running, so skipping them carries no risk of a network partition.
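As an illustration, the two timeouts are set in the cluster configuration file, whilst --nowait-nodes is passed on the command line. The values, host name and node IDs below are placeholders; the exact units should be checked against the RonDB configuration reference (upstream NDB interprets these timeouts in milliseconds).

[NDBD DEFAULT]
StartPartialTimeout=60000      # hypothetical value; 60 seconds if interpreted in milliseconds
StartPartitionedTimeout=0      # 0 = wait forever rather than start partitioned (default)

ndbmtd --ndb-connectstring=mgmd-1:1186 --ndb-nodeid=1 --nowait-nodes=3,4   # node IDs 3 and 4 are known to be down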

Resilient Starts#

When data nodes start up, they will attempt to connect to the management server. In order to make this process more resilient, one can use the following parameters:

  1. --connect-retries, default is 12

  2. --connect-retry-delay (seconds), default is 5s

This may help avoid unnecessary binary restarts.
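For instance, to tolerate a management server that takes several minutes to become reachable, the retry behaviour can be made more generous (host name, node ID and values are placeholders):

ndbmtd --ndb-connectstring=mgmd-1:1186 --ndb-nodeid=1 --connect-retries=30 --connect-retry-delay=10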

Automatic Restarts#

Depending on whether the data nodes are run under a process manager such as systemd or supervisord, one may want to start them as daemons (the default) or as foreground processes. The latter is done with the --foreground parameter and is also useful for containerization purposes.
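A minimal sketch of a systemd unit using --foreground, assuming the binary is installed as /usr/sbin/ndbmtd and with the connect string and node ID as placeholders:

[Unit]
Description=RonDB data node
After=network-online.target

[Service]
ExecStart=/usr/sbin/ndbmtd --foreground --ndb-connectstring=mgmd-1:1186 --ndb-nodeid=1
Restart=on-failure

[Install]
WantedBy=multi-user.target

With --foreground, systemd supervises the data node process directly and Restart=on-failure takes care of automatic restarts.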

If one is not using a process manager, one can still use the built-in automatic restart functionality of the data nodes. This is done by setting StopOnError=0. The angel process that supervises the data node will then restart it automatically if it fails, avoiding the need for manual intervention.
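A minimal sketch of enabling this for all data nodes in the cluster configuration file:

[NDBD DEFAULT]
StopOnError=0      # restart a failed data node automatically instead of stopping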

Initial Node (Re-)Starts#

Apart from initial cluster starts, we may have initial node (re-)starts. These can be run in the following cases:

  • The node’s file system was corrupted

  • Moving node to a different machine / VM with local NVMe drives

  • Complex software upgrades or downgrades (downgrades are needed for upgrade roll-backs)

Whenever running an initial node restart, it is important that at least one other data node in the same node group is running, since this is where the (re-)starting data node fetches its data from. The obvious disadvantage of an initial node (re-)start is therefore that it places a significantly higher load on the running data nodes.

An initial node (re-)start is performed in the following order:

  1. Stop running data node

  2. Remove disk data files (tablespaces and UNDO log files) manually

  3. Optional: Remove all files in DataDir manually

  4. Run ndbmtd with the --initial flag
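Step 4 could look as follows, with the connect string and node ID as placeholders:

ndbmtd --ndb-connectstring=mgmd-1:1186 --ndb-nodeid=1 --initial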

Since an initial node (re-)start removes all local data before starting, it should not be run unless necessary.