
Starting programs#

This section provides some details about how to start up the cluster and how to start the individual programs. For a complete and up-to-date list of the command parameters that can be used for each program, consult the MySQL Reference Manual.

Most of the information in this chapter is likely to be handled by whatever DevOps solution you use in production. The managed RonDB service solves all of those issues such that you as a developer can focus on using RonDB.

Thus this chapter targets open source users that want to control the RonDB programs on their own. It is also useful for users of the managed RonDB service when looking for the logs generated by the various nodes in RonDB.

Order of starting#

To start a cluster we first start up the RonDB configuration database. The configuration database consists of one or two RonDB management servers (ndb_mgmd). It is sufficient to have one management server in the cluster. If the management server goes down the cluster will continue to operate. However, if one of the nodes fails and tries to restart, it cannot come up again until a management server is up and running. Therefore we also support using two management servers.

The management server writes the cluster log. This is a log containing messages about interesting events happening in the cluster. If no management server is up and running, no cluster log messages will be written for the period when no ndb_mgmd is running.

When half of the nodes in the cluster are lost, an arbitrator is used to decide which part of the cluster survives. This is normally handled by one of the RonDB management servers, but it can also be done by any of the API nodes or MySQL Servers if no management server is up and running.

Normally, a high availability installation has at least two management servers defined in the cluster.

It is possible to start up RonDB data nodes and MySQL Servers before the management server. The RonDB data nodes will simply wait until the specified management server is up and running before they start. A RonDB data node cannot complete its startup without access to a management server. The management server provides the cluster configuration to the starting nodes; this configuration is required to start up the data nodes, the API nodes and any MySQL Server using the NDB storage engine.

When starting a data node, a MySQL Server or an API node, one can specify how long to wait for a management server using the parameters --connect-retries and --connect-retry-delay. --connect-retry-delay is given in seconds and defaults to 5. --connect-retries defaults to 12 and at least one attempt will be made even if it is set to 0. To wait more or less forever, one can set the number of retries to a very high number.
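As a minimal sketch (the hostname and node id are placeholder assumptions), a data node that should retry for roughly ten minutes before giving up could be started as:

ndbmtd --ndb-connectstring=host_ndb_mgmd1 \
       --ndb-nodeid=1 \
       --connect-retries=120 \
       --connect-retry-delay=5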

When all ndb_mgmd's have been started, the next step is to start up all the RonDB data nodes (ndbmtd). The first time they start up they will perform an initial start. When this initial start is completed, the MySQL Servers (mysqld) and API nodes can connect to the cluster. When all the MySQL Servers and all API nodes have started up, the cluster is ready for action.

An upgrade of RonDB is performed in the same order. First restart the management server(s), next restart the data nodes and finally restart the MySQL Servers and API nodes. A very safe path is to perform a rolling upgrade of the RonDB software one node at a time. When restarting the RonDB data nodes one can restart one node per node group at a time. The MySQL Servers and API nodes can be restarted as many or as few at a time as the application can handle.

Node ids#

In RonDB each node needs to have a node id. My personal preference is to control the node id selection myself when starting up the programs. This gives more control over node id allocation. If no node id is provided, the management server and data nodes will select a node id for you. This could possibly be used for MySQL Servers and API nodes, but it certainly makes it harder to manage the cluster since all logs, all tables with log information (the ndbinfo tables presented later in this book) and all SHOW commands for the cluster use the node id to represent a node.

The possibility to dynamically choose a node id was mainly implemented to make it easy to get a cluster up and running. But to maintain a cluster, it is certainly easier if the node id is provided when you start up a program in the cluster.

When starting up a management server or a RonDB data node you can provide the node id as a command line option with the parameter --ndb-nodeid.

The same is true when writing the RonDB configuration file: it isn't necessary to provide node ids in the configuration. In this case the management server will select the node ids. Again, it is much easier to maintain the cluster if the configuration file already has node ids assigned to each node and each node uses its node id when starting up.

A MySQL Server can start up using several node ids. In this case the MySQL Server parameter --ndb-cluster-connection-pool-nodeids can be used to specify a comma separated list of node ids. To use more than one node id for one MySQL Server, use the parameter --ndb-cluster-connection-pool and set it to the number of node ids to use. The purpose of using more than one node id is that each node id uses one cluster connection and this connection has one socket, one receive thread and one send thread. By using multiple node ids the scalability of the MySQL Server is increased. At the same time one should not use too many. One cluster connection normally scales to around 8-16 x86 CPUs in a MySQL Server.

For API nodes the node id is provided when creating an NDB cluster connection object. More about this when going through the NDB API.

RonDB data nodes can only have node ids from 1 to 144. This means that it is a good idea to reserve node ids for data nodes since data nodes can be added online in RonDB. Since RonDB limits the total number of nodes to 255, we prefer to limit the number of data nodes to 64 and thus use node ids 65 and 66 for RonDB management servers and node ids 67 up to 255 for MySQL Servers and API nodes.

Starting ndb_mgmd#

Starting the first time#

The first time the first RonDB management server in a new cluster is started, the RonDB configuration file (config.ini) is used to create the RonDB configuration database. One should first start the RonDB management server where the config.ini file resides. To start this first RonDB management server one uses the following command (assuming this management server has node id 65):

ndb_mgmd --config-file=/path/config/config.ini \
         --configdir=/path/to/config_database_directory \
         --ndb-nodeid=65

If a second management server is started, it will get its configuration from the first management server. Thus in order to start up it needs to know where it can find the first management server. This information is provided using the --ndb-connectstring parameter where a text string is provided as hostname:port. If only a hostname is provided, the default port 1186 is used. The second management server would be started using the following command (assuming it has node id 66):

ndb_mgmd --ndb-nodeid=66 \
         --configdir=/path/to/config_database_directory \
         --ndb-connectstring=host_ndb_mgmd1

Notice that the initial RonDB configuration database isn't created until all RonDB management servers have started up.

When all management servers have started up in this way they will all have the same configuration stored in the RonDB configuration database.

Notice that --configdir specifies the directory where the RonDB configuration database is placed, whereas the cluster log is stored in the directory specified by the DataDir configuration parameter for the management server. It is normally a good idea to use the same directory for both of these.
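A minimal sketch of the management server section in config.ini could look as follows (the hostname and path are placeholder assumptions); the --configdir command line option can then point to the same directory:

[ndb_mgmd]
NodeId=65
HostName=host_ndb_mgmd1
DataDir=/path/to/config_database_directory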

It is possible to start the second management server using the same command as for the first. This requires that both the first and the second management server use exactly the same configuration file. Using more than two management servers is not supported.

The default directory for writing the binary configuration files depends on your build of RonDB. The builds provided by MySQL use the directory /var/lib/mysql-cluster. It is recommended to always set --configdir when starting the RonDB management server.

The file name of the generated binary file is ndb_NODE_ID_config.bin.SEQ_NUMBER where SEQ_NUMBER is the version number of the RonDB configuration database.

Each time the configuration changes, a new file is created with the next higher sequence number.
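For example, after one configuration change the configuration directory of the management server with node id 65 would contain:

ndb_65_config.bin.1
ndb_65_config.bin.2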

The ndb_mgmd program will ensure that all changes to the RonDB configuration database are applied in the same way in all RonDB management servers.

In effect the ndb_mgmd maintains a very simple distributed database. It is created the first time one starts a management server using the --config-file option. This parameter points to the RonDB configuration file.

If the directory already contains a binary configuration file, the configuration file will be ignored. If it is desirable to start a new cluster from scratch, one can scrap the old configuration and overwrite it with a new configuration using the --initial flag. This flag should not be used to update the configuration in a running cluster.

Starting with --initial will fail if another management server is still running with a different configuration file. So actually changing the RonDB configuration database to a completely new value can only happen by restarting all management servers, preferably also after stopping all data nodes and then restarting the cluster from scratch.

Restarting the ndb_mgmd#

If a management server fails one restarts it simply by executing the ndb_mgmd command again. Since it still has access to the configuration database it can find the other management servers and can thus ensure that any updates to the configuration database are applied before it starts to ship out configurations to starting nodes.

Changing the RonDB configuration#

In a previous section we showed how to start up with one or several RonDB management servers. This installs the first version of the RonDB configuration database. If a management server stops or crashes it can simply be restarted and it will come up with the same configuration, or, if another management server has been updated, it will get the updated configuration from this management server.

This shows how to create the first configuration and how to keep the RonDB configuration database up and running. It is desirable to be able to update the RonDB configuration database as well.

Here is how an update of the configuration is performed. To update the configuration one starts with the config.ini file. This configuration file should be saved properly, and a good idea is to store it together with all management servers. If the file is lost it can be recreated by printing the configuration from a management server. This will write a configuration file with all configuration variables displayed.

To update the configuration we edit the configuration file. This means that we can change configuration parameters for existing nodes, and we can add new data nodes, new management servers and new MySQL Servers and API nodes.

When we have updated the configuration we kill one of the RonDB management servers. The pid of the management server is found in a pid file in the directory specified by DataDir.
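A hypothetical sketch of this step (the pid file name is assumed here to follow the same ndb_NODEID.pid pattern used by the data nodes, and the path and node id are placeholders):

kill $(cat /path/to/config_database_directory/ndb_65.pid)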

Next we restart it with a reference to the new configuration file and in addition we add the parameter --reload. The command would be:

ndb_mgmd --config-file=/path/config/config.ini \
         --configdir=/path/to/config_database_directory \
         --reload \
         --ndb-nodeid=65

When this management server starts, it will first start using the old RonDB configuration database. Next it will compare the new configuration file to the old configuration and report in the cluster log what changes have been made. The other management servers will be updated with the new configuration. If not all management servers are up when this management server starts, it will wait until all management servers are up and running.

There is one special case where the --initial flag is needed instead of --reload. This is when you have started with one management server and want to add a second one. In this case one should first stop the running management server after changing config.ini to include the new management server. Next one starts this management server with the --initial flag. Finally the new second management server is started pointing to the first one. The --initial flag should not be set when starting the second management server.
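A sketch of this procedure, reusing the hostnames, paths and node ids from the earlier examples (all of them placeholder assumptions):

# Restart the first management server with the updated config.ini
ndb_mgmd --config-file=/path/config/config.ini \
         --configdir=/path/to/config_database_directory \
         --initial \
         --ndb-nodeid=65

# Start the new second management server pointing to the first one
ndb_mgmd --ndb-nodeid=66 \
         --configdir=/path/to/config_database_directory \
         --ndb-connectstring=host_ndb_mgmd1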

If for some reason one of the management servers is permanently down, it is possible to start up a management server without waiting to synchronize with all other management servers. The permanently failed management server can be declared using the command line parameter --nowait-nodes, which takes a list of node ids. Without this parameter a management server will not start up until all management servers have started up.

Management servers that were down at the time of the change will get the new configuration when starting up (but to start up, --config-file needs to point to the new configuration file).

If the new configuration contains incompatible changes the configuration change will not be performed and the management server will stop with an error message. The same will happen when the new configuration file contains syntax errors or values out of range.

After this the configuration change is completed and the RonDB configuration database has changed. However, nodes will not be informed of this change until they are restarted. To make all nodes aware of the new configuration, all nodes in the cluster have to be restarted using a rolling restart procedure.

It is a good idea to keep the config.ini file used to change the configuration around until the next configuration change is needed, preferably duplicated. This avoids having to work with full configuration files that list all entries, including those that have the default value.

If problems occur when changing the configuration, try stopping both management servers before performing the change. Another workaround that solves many issues is to always use --config-file also for the second management server.

Special considerations when starting ndb_mgmd#

One special case to handle is when the ndb_mgmd is started on a machine with several network interfaces. In this case it is important that the management server binds to the correct network interface. This can be specified using the parameter --bind-address, where one specifies the hostname that the management server will bind to.

Normally the management server uses port 1186. If this port is used, no special configuration item and no command line parameter is needed. If a different port is to be used it is sufficient to change the configuration; the management server will pick up the port number from the configuration. If multiple management servers are running on the same host they need to specify their node ids when they start up.

It is possible to use a my.cnf file to specify the configuration; to do this, use the command line parameter --mycnf. No more information on this option is provided in this book. It is mainly used by internal test tools.

The management server is started as a daemon by default.

Starting ndbmtd#

The multithreaded data node uses the ndbmtd binary.

There are two things that are normally needed on the command line when starting up ndbmtd. The first is a reference to the RonDB management server(s). This can either be provided through the environment variable NDB_CONNECTSTRING or by setting the parameter --ndb-connectstring. It is sufficient to list one of the management servers, but for higher reliability it is possible to list the hostnames of all management servers. The connect string is only used at startup to find the configuration. Once the configuration has been fetched, all management servers will be connected to the data node. The hostname is normally sufficient in the connect string, but if a non-standard port is used (other than 1186) the port can be provided as hostname:port in the comma separated list.
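As an alternative to the --ndb-connectstring parameter used in the example below, the connect string can be provided through the environment (hostnames and node id are placeholder assumptions):

export NDB_CONNECTSTRING="host1,host2"
ndbmtd --ndb-nodeid=1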

Second we should specify the node id as described in a previous section. This ensures that we have control over the node id used by the processes running RonDB.

ndbmtd --ndb-connectstring=host1,host2 --ndb-nodeid=1

Normally no more parameters are needed; the remaining parameters needed by the RonDB data nodes are provided through the RonDB configuration as delivered by the RonDB management server.

Data node files#

The RonDB data node will set up its working environment in the data directory specified by the RonDB configuration parameter DataDir. There will be one file there called:

ndb_NODEID.pid

where NODEID is the node id used by this node. This file contains the PID of the ndbmtd process and is used to ensure that we don't start two processes using the same data directory. There will also be a file called:

ndb_NODEID_out.log

This file is called the node log and contains the log messages specific to this node. This log will contain a fairly detailed description of the progress of node restarts.

If a node failure occurs there will also be a file called:

ndb_NODEID_error.log

This file will contain a description of the errors that have occurred. If several failures occur, it will keep a configurable number of node failure entries in the error log, by default 25.

In a crash a set of trace files is created. These will be called:

ndb_NODEID_trace.log.ERRORID_tTHREAD_NUMBER

where ERRORID is the number of the error and THREAD_NUMBER is the number of the thread starting at 1. There is also a file called:

ndb_NODEID_trace.log.ERRORID

In ndbmtd this is the trace file of the main thread; in ndbd it is the only trace file generated since there is only one thread.

The actual data is present in a directory called:

ndb_NODEID_fs

We will go through the content of the files stored in this directory in a later chapter.

In debug builds of RonDB there is one more file called:

ndb_NODEID_signal.log

This file is used for special debug messages and will not be described any further in this book. To activate all these debug messages, go into the file called SimulatedBlock.cpp, make the method debugOut always return true and then recompile the cluster code. It is a nice tool when developing new features in RonDB.
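As an illustration (the node id, error id and thread count are just example assumptions), the data directory of data node 1 that has crashed once and runs two worker threads could contain files such as:

ndb_1.pid
ndb_1_out.log
ndb_1_error.log
ndb_1_trace.log.1
ndb_1_trace.log.1_t1
ndb_1_trace.log.1_t2
ndb_1_fs/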

Special considerations when starting ndbmtd#

The above start method is fine both for the first start and for subsequent starts. There are, however, cases when we want to start from a clean sheet with a clean file system. There are some rare upgrade cases that require a so called initial node restart, and there might be rare error cases where this is necessary (e.g. some corruption in the file system).

RonDB can handle that one or several nodes in a node group start from a clean file system. However, at least one node in each node group needs to retain the data in order for the cluster to be recoverable. One method of achieving this clean sheet is to place yourself in the data directory of the data node and issue the command rm -rf * (be careful to ensure that the data node isn't running when performing this command).

This command will wipe away all the files of that data node (except possibly some tablespaces that have been given a file name independent of the placement of the data directory). An alternative method is to use the command line parameter --initial. This will remove all data from the RonDB data directory of the node, except the disk data files (tablespaces and UNDO log files). Removal of disk data files needs to be done manually before an initial node restart.

It is possible to perform the initial start of a cluster where not all nodes in every node group are available. The configuration must have an equal number of replicas in each node group, and all of these nodes need to be defined in the configuration. However, when performing the initial start of the cluster, one can specify exactly which nodes to wait for in the startup. First use the parameter --initial-start and then add the node ids of the nodes you don't want to wait for using the --nowait-nodes parameter. This parameter takes a list of node ids that will not be waited for when starting up the cluster.

This option can be used if you have an uneven number of nodes available to start up. Say for example that you want to have a cluster with 15 data nodes with 3 replicas, thus 5 node groups of 3 nodes each. But one machine had problems when delivered and you still want to get started setting up your cluster. You define all 15 nodes in the cluster configuration, but when starting up with only 14 of them available you use the above method to ensure that the cluster performs an initial start even without all cluster nodes being available. Each node that starts up has to use the same parameters for this to work, as shown in the sketch below.
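A sketch of such an initial start (hostnames, node ids and the assumption that node id 15 is the missing node are all placeholders), executed on each of the 14 available data nodes with its own --ndb-nodeid:

ndbmtd --ndb-connectstring=host1,host2 \
       --ndb-nodeid=1 \
       --initial-start \
       --nowait-nodes=15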

When the data node starts up it will attempt to connect to the management server. If this fails it will make a number of retries with a certain delay between the retries. The number of retries to perform and the delay between each retry can be set using the parameters --connect-retries and --connect-retry-delay. Default is 12 retries with 5 seconds of delay between each attempt.

When starting up ndbmtd on a machine with multiple network interfaces it is a good idea to use the command line parameter --bind-address to specify the address (hostname or IP address) of the network interface to be used by this data node.

ndbmtd is always started as a daemon on Unix OSs unless specifically told not to, using the --nodaemon parameter or the --foreground parameter.

When running ndbmtd as a daemon you get two processes started for each ndbmtd. The first process started is called the angel process. The angel process starts up the second ndbmtd process, and its sole purpose is to ensure that the data node can be automatically restarted if the configuration parameter StopOnError is set to 0. In that case the cluster will not need any manual intervention when a node fails: it will perform the node failure handling and after that the angel process will immediately start the node again.

Notice that by default ndbmtd will listen on a random port number. This makes the first configuration very easy to set up. However, it makes it hard to set up RonDB in a secure environment with firewalls and so forth. To avoid this problem it is highly recommended to set the RonDB configuration option ServerPort to the desired port number to be used by the data node processes. As a suggestion, use port number 11860 (easy to remember since 1186 is the default port number used by the RonDB management server).
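A minimal sketch of how this could look in config.ini (placing it in the [ndbd default] section so that all data nodes use the same port is an assumption; it can also be set per data node):

[ndbd default]
ServerPort=11860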

Starting a MySQL Server (mysqld)#

Starting a MySQL Server in RonDB is not very different from starting a MySQL Server using InnoDB or any other storage engine. The only new things are that you need to add the parameter --ndbcluster to the command line (or ndbcluster in the MySQL configuration file, usually named my.cnf), and that you need to specify the RonDB connect string to the management server such that your node(s) get the configuration of the cluster. This works the same way as for ndbmtd and ndb_mgmd: you specify the parameter --ndb-connectstring and set it to a list of hostnames where the management server(s) reside.

Most parameters in the MySQL Server can either be set as command line parameters or as configuration parameters in a MySQL configuration file. Personally I prefer to use only command line parameters, so most of the descriptions here assume command line parameters, but it works just as well to use configuration files. My problem is simply that I run too many MySQL Servers on one box, and thus using standard locations for configuration files won't work very well for me.

There is a great variety of parameters that can be configured when running the MySQL Server using NDB as the storage engine. In this section we will focus on just the most important ones and let the advanced ones be covered in a later chapter on configuration handling.

In most installations it is sufficient to have one cluster connection, for MySQL Servers running on larger servers it might be necessary to use more than one cluster connection. A good rule of thumb is to use about one cluster connection per 8-16 x86 CPUs. On a larger server with 16 cores one might need 4 cluster connections.

RonDB uses one cluster connection per 8 x86 CPUs; the managed RonDB service will never use MySQL Servers with more than 32 CPUs. There is no specific gain in using very large MySQL Servers.

To know exactly which node ids are tied to a particular MySQL Server, one should specify the node ids to use when starting the MySQL Server. It is not necessary from an operational point of view, but it helps in managing the MySQL Server. For this one sets the parameter --ndb-cluster-connection-pool to the number of cluster connections to use and lists the node ids to use for this MySQL Server in the --ndb-cluster-connection-pool-nodeids parameter.

Starting up a large MySQL Server could look something like this:

mysqld --ndbcluster \
  --ndb-connectstring=ndb_mgmd_host1 \
  --log-error=/path/to/error_log_file \
  --datadir=/path/to/datadir \
  --socket=/path/to/socket_file \
  --ndb-cluster-connection-pool=4 \
  --ndb-cluster-connection-pool-nodeids=67,68,69,70 &

Note the ampersand at the end. The MySQL Server will run as a background process if you add this ampersand, which is required to get the terminal back after starting the MySQL Server. The above command assumes that you have already initialised the data directory of the MySQL Server.

To initialise the data directory for a test run you can use the above command with the added parameter --initialize-insecure. This command will perform a bootstrap of the data directory and exit. After this command you can run the above command to start the MySQL Server. To initialise a secure MySQL Server, use the --initialize command parameter instead. In this case a random password is generated that you will have to change when connecting to the MySQL Server the first time.
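A minimal sketch of such a bootstrap run, reusing the placeholder paths and hostname from the command above:

mysqld --ndbcluster \
  --ndb-connectstring=ndb_mgmd_host1 \
  --log-error=/path/to/error_log_file \
  --datadir=/path/to/datadir \
  --initialize-insecure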

Setting the data directory, the error log file and the socket file are the minimal things required to get a proper MySQL Server started in a test environment. There is a lot more to starting MySQL Servers that I will not go into any detail about here. There are many books and the MySQL Reference Manual covers this in great depth. In a later chapter I will go through the configuration settings that are useful when starting a MySQL Server to be used with RonDB.

Starting ndb_mgm#

ndb_mgm is a tool that can be used to send commands to the RonDB management server. To run this tool we only need to know how to connect to the management server. This uses the normal --ndb-connectstring parameter. On Unix one can set the environment variable NDB_CONNECTSTRING to specify the whereabouts of the management server.

The most commonly used command in the management client is SHOW. This command shows the status of the cluster nodes.
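For example (the hostname is a placeholder assumption), the SHOW command can be run non-interactively using the -e option:

ndb_mgm --ndb-connectstring=ndb_mgmd_host1 -e "SHOW"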

The management client is also the tool used to start a backup and to report on the progress of the backup.

It is possible to gracefully shut down RonDB data nodes and RonDB management servers from the management client. It is not possible to shut down API nodes and MySQL Servers this way.

Another common command used here is REPORT MemoryUsage, which checks the memory usage status of the data nodes.

There is also a command called DUMP. First one specifies the node to send the command to (ALL means all nodes), next the DUMP code, and after that a set of optional arguments to the DUMP code. The DUMP command is sent to the specified data node(s) and each DUMP code represents a different command. Most of these commands will print various debug information to the cluster log. These commands are mostly used when trying to figure out why the cluster is not working correctly. When working with the Hopsworks support team, customers may at times be asked to execute some of these commands.
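As an illustration of the command formats (the specific DUMP code shown is just one example; the codes themselves are covered later), the following could be entered at the ndb_mgm prompt:

ALL REPORT MemoryUsage
ALL DUMP 1000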

More details on these commands in a later chapter.

bind-address#

If there are issues getting programs to communicate, one can consider setting --bind-address to ensure that the program listens for connections on the desired network interface.