RonDB and Docker#
There are numerous reasons why one may want to use Docker to run RonDB. One such reason is that Docker removes the need to worry about any other MySQL installations on the machine.
The Docker image is completely independent of other binary installations on the machine since it carries a minimal Linux installation of its own. In this sandbox you can create a minimal and sufficient environment to run your applications.
Yet another use case is to use Docker to create a VPN network where the cluster access is completely controlled within Docker.
The official RonDB Docker images are found in the mronstro/rondb repository on Docker Hub.
In order to use them you should first use the docker pull command to fetch the proper version of RonDB. The command below fetches the latest version of RonDB.
docker pull mronstro/rondb
To fetch a specific version, e.g. RonDB 21.04, use instead the command:
docker pull mronstro/rondb:21.04
Currently you can choose between version 21.04 and version 21.10; 21.04 is the stable version.
These Docker images use a slim installation of Oracle Linux 8.
The Docker container expects a volume to be provided that is mapped to /var/lib/rondb inside the container. For MySQL Server instances one should instead provide a volume for /var/lib/mysql.
There is a simple my.cnf provided at /etc/my.cnf and a simple cluster configuration file at /etc/rondb.cnf.
Here is the content of the cluster configuration file.
[ndbd default]
NoOfReplicas=3
ServerPort=11860
DataDir=/var/lib/rondb
TotalMemoryConfig=3G
MaxNoOfTables=512
MaxNoOfAttributes=8000
MaxNoOfTriggers=20000
TransactionMemory=300M
SharedGlobalMemory=500M

[ndb_mgmd]
NodeId=65
hostname=192.168.0.2
DataDir=/var/lib/rondb

[ndb_mgmd]
NodeId=66
hostname=192.168.0.3
NodeActive=0

[ndbd]
NodeId=1
hostname=192.168.0.4

[ndbd]
NodeId=2
hostname=192.168.0.5

[ndbd]
NodeId=3
hostname=192.168.0.6
NodeActive=0

[mysqld]
NodeId=67
hostname=192.168.0.10

[mysqld]
NodeId=68
hostname=192.168.0.10

[mysqld]
NodeId=69
hostname=192.168.0.10

[mysqld]
NodeId=70
hostname=192.168.0.10

[mysqld]
NodeId=71
hostname=192.168.0.11

[mysqld]
NodeId=72
hostname=192.168.0.11

[mysqld]
NodeId=73
hostname=192.168.0.11

[mysqld]
NodeId=74
hostname=192.168.0.11
In this configuration file we expect two data nodes to run that contain replicas of each other. One can easily add a third replica. Each data node will use 3 GByte of memory, thus Docker will require at least 8 GByte of memory to run the cluster. When testing this on Mac OS X we have seen that the data nodes are killed by Docker with SIGKILL if Docker has not been given enough memory. Thus a minimum of 8 GByte of memory should be provided to Docker for a default setup of RonDB.
It is also possible to easily add a second RonDB management server in this configuration.
We have provided 8 API slots spread over two hosts. This means that we can run up to 8 MySQL Servers on those two hosts. We can also run 2 MySQL Servers that use 4 cluster connections each, one per host. The API slots can also be used to run NDB API applications, ClusterJ applications and Node.js applications.
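For example, a MySQL Server that uses 4 cluster connections can be started with the options below (a sketch: --ndb-cluster-connection-pool sets the number of cluster connections and --ndb-cluster-connection-pool-nodeids lists the node ids to use, here the four slots of the first host in the default configuration file):

```shell
# One MySQL Server using four API slots (node ids 67-70)
mysqld --ndb-cluster-connection-pool=4 \
       --ndb-cluster-connection-pool-nodeids=67,68,69,70
```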
Below is the my.cnf file that is used by default by the nodes when they start up.
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.2
user=mysql

[mysql_cluster]
ndb-connectstring=192.168.0.2
The Docker container has an entrypoint script that can start a RonDB management server, a RonDB data node, a MySQL Server, a RonDB management client or any other program in the RonDB installation.
In order to use your own my.cnf, add the following switch to your docker run command:

-v /path/my.cnf:/etc/my.cnf
Similarly, to replace the cluster configuration file, add the following:

-v /path/rondb.cnf:/etc/rondb.cnf
To use your own data directory you should map it to the internally used /var/lib/rondb. This is again done using the -v switch:

-v /path/datadir:/var/lib/rondb
For MySQL Servers one should instead replace the directory /var/lib/mysql.
The configuration database is placed in the directory /var/lib/rondb as well.
For users on systems with SELinux enabled it might be necessary to allow the Docker container to mount external files and volumes. This can be done using the commands:

chcon -Rt svirt_sandbox_file_t /path/to/file
chcon -Rt svirt_sandbox_file_t /path/to/dir
Using Docker to avoid installation problems#
A common case is to use the Docker container simply as a solution to the problem of installing MySQL Cluster: no installation is necessary on the host, it all happens inside Docker. In this case you want to use your own set of configuration files, your own data directory and your own networking.
You start a NDB management server by using the following command:
docker run -d \
  --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/rondb.cnf:/etc/rondb.cnf \
  -v /path/datadir:/var/lib/rondb \
  --name=mgmt1 \
  mronstro/rondb ndb_mgmd --ndb-nodeid=65
If you want additional parameters to the startup command you can add those at the end, e.g. --initial or --reload for the management server, or --ndb-nodeid=65 as in the example above. Given that our default configuration contains two RonDB management servers, it is required to provide the parameter --ndb-nodeid=65.
There is a similar command for starting NDB data nodes: ndb_mgmd is replaced by ndbd or ndbmtd, the mapping of the cluster configuration file is removed, and the name of the container is changed. When starting the data node containers it is necessary to specify the node id.
docker run -d \
  --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/rondb \
  --name=ndbd1 \
  mronstro/rondb ndbmtd --ndb-nodeid=1
One special thing here is that the node log, both from the management server and from the data nodes, is piped to stdout in the Docker container. To read the node log of the first data node you can use the following command:
docker logs ndbd1
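The docker logs command has a few switches that are handy when watching a node start up; -f follows the log as it grows and --tail limits the output to the most recent lines:

```shell
# Follow the node log of the first data node as it starts up
docker logs -f ndbd1

# Show only the last 50 lines of the management server log
docker logs --tail 50 mgmt1
```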
To run a MySQL Server replace ndbmtd by mysqld and change the name of the container. We set the node id of the MySQL server using --ndb-cluster-connection-pool-nodeids.
docker run -d \
  --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/mysql \
  --name=mysqld1 \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67
Using Docker on a single host#
In the single host case you can use the default Docker bridge network or define a separate Docker network for your tests. The command to create such a new Docker network is very simple:
docker network create mynet --subnet=192.168.0.0/16
After this you can use the IP addresses in the range 192.168.0.0 to 192.168.255.255 for your Docker network. The default configuration file is intended for this use case.
docker run -d --net=mynet \
  --name=ndbd1 \
  --ip=192.168.0.4 \
  mronstro/rondb ndbmtd --ndb-nodeid=1
Above is the command to start an NDB data node and connect it to the Docker mynet network.
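The remaining nodes are started in the same fashion. A full bring-up of the default configuration might look like the sequence below (a sketch: the container names are arbitrary, while the node ids and IP addresses follow the default configuration file):

```shell
# Create the network once, then start the management server,
# the two active data nodes and one MySQL Server on their
# configured addresses.
docker network create mynet --subnet=192.168.0.0/16

docker run -d --net=mynet --ip=192.168.0.2 --name=mgmt1 \
  mronstro/rondb ndb_mgmd --ndb-nodeid=65

docker run -d --net=mynet --ip=192.168.0.4 --name=ndbd1 \
  mronstro/rondb ndbmtd --ndb-nodeid=1
docker run -d --net=mynet --ip=192.168.0.5 --name=ndbd2 \
  mronstro/rondb ndbmtd --ndb-nodeid=2

docker run -d --net=mynet --ip=192.168.0.10 --name=mysqld1 \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67
```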
It is not necessary to use volumes in this case; the benefit of volumes is that they make it easier to check the logs. It is always possible to check logs in a running container, e.g. to check logs in the ndbd1 container one can log into the container using this command:
docker exec -it ndbd1 /bin/bash
Using this network it is easy to run one command per node you want to start in the cluster. It is just as easy to tear down this network, and there will be no sign of any RonDB installation afterwards. This is good for testing RonDB in a sandboxed environment.
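The teardown is a few commands (a sketch, assuming containers named mgmt1, ndbd1 and mysqld1 as in the earlier examples):

```shell
# Stop and remove the containers, then remove the network itself
docker stop mysqld1 ndbd1 mgmt1
docker rm mysqld1 ndbd1 mgmt1
docker network rm mynet

# Optionally remove any anonymous volumes left behind
docker volume prune
```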
Using Docker overlay network#
Running Docker containers in a Docker overlay network requires the use of a key-value store. Docker supports a number of different key-value stores, such as Consul, etcd and ZooKeeper.
One thing to remember when using a key-value store to handle the Docker network is that you have created a dependency on this key-value store being up and running for your cluster to operate. If you are aiming for the highest availability this might not be desirable.
A MySQL Server is started in an overlay network called my_overlay_net using the command:
docker run -d --net=my_overlay_net \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/mysql \
  --name=mysqld1 \
  --ip=192.168.0.10 \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67
Setting up a Docker overlay network is documented in the Docker documentation. Once the network has been set up and the key-value store has been started, it is possible to start Docker containers using this network in exactly the same fashion as for a local Docker bridge network.
Using Docker Swarm#
Docker Swarm is another method to build Docker networks. Currently Docker Swarm cannot be used with RonDB since it does not support static IP addresses on containers. There is some discussion on the NDB forum on how to overcome this problem.
Setting the password on MySQL Server container#
The MySQL Server is started in secure mode. You need to discover the generated password and change it. To do this (assuming that the MySQL Server container is called mysqld1), use the command:
docker logs mysqld1 2>&1 | grep PASSWORD
Next you use this password to connect a MySQL client to the MySQL Server using the command (the MySQL client will run as localhost in the same Docker container as the MySQL Server):
docker exec -it mysqld1 mysql -uroot -p
After the command has executed you will be asked for the password and after that you are connected to the MySQL Server. Now it is time to change the password. Use this MySQL command to do this:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassword';
It is not possible to execute any other command in the MySQL client until the password has been changed.
After changing the password you are ready to play around with RonDB using the MySQL client and you can create a database, create tables, execute queries on tables and all the other things one can do with MySQL.
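As a quick smoke test one can create an NDB table and query it from the same container (a sketch; the database and table names here are made up, and ENGINE=NDBCLUSTER makes the table a clustered table stored in the data nodes):

```shell
# Create a database and an NDB table, insert a row and read it back;
# the mysql client prompts for the root password
docker exec -it mysqld1 mysql -uroot -p -e "
  CREATE DATABASE IF NOT EXISTS testdb;
  CREATE TABLE testdb.t1 (id INT PRIMARY KEY, v VARCHAR(32))
    ENGINE=NDBCLUSTER;
  INSERT INTO testdb.t1 VALUES (1, 'hello');
  SELECT * FROM testdb.t1;"
```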
Another, simpler variant to start a MySQL Server is to use the following command:
docker run -d --net=my_overlay_net \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=your_password \
  --name=mysqld1 \
  --ip=192.168.0.10 \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67
This means that you set your own password immediately, there is no requirement to change it after connecting, and you can get down to the business of trying out RonDB more quickly.
Running the RonDB management client#
Running the RonDB management client requires an interactive Docker terminal. The easiest way is to attach a terminal to the running RonDB management server container using the following command.
docker exec -it mgmt1 ndb_mgm
After this you can verify that the cluster is up and running by running the show command.
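The show command can also be run non-interactively with the -e switch of ndb_mgm, which is convenient in scripts:

```shell
# Print the cluster status once and exit
docker exec mgmt1 ndb_mgm -e show
```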
Using Docker to limit memory resources#
If we want to run several cluster programs on the same machine or VM it is useful to set limits on the memory usage of each program. The RonDB data node consumes quite a lot of memory, whereas the other programs usually use a lot less.
As an example we might want to colocate a RonDB data node and a MySQL Server on the same machine. The MySQL Server has no limit on how much memory it can use. By limiting the memory usage of the MySQL Server we can protect the RonDB data node from a runaway MySQL Server while still running on the same computer or VM.
Docker has this capability: when starting a Docker container it is possible to limit its memory usage.
If we want to run a RonDB data node combined with a MySQL Server and we have a machine with 32 GByte of memory, we can start them up in the following manner:
docker run -d --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/mysql \
  --name=mysqld1 \
  --memory=4G \
  --memory-swap=4G \
  --memory-reservation=4G \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67

docker run -d --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/rondb \
  --name=ndbd1 \
  --memory=24G \
  --memory-swap=24G \
  --memory-reservation=24G \
  mronstro/rondb ndbmtd --ndb-nodeid=1
These two programs are now limited to using at most 28 GByte of memory in total and they cannot steal memory from each other, leaving 4 GByte of memory for the OS and other programs. This can be a good way of testing how RonDB behaves in an environment with a certain memory size.
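The budget can be sanity-checked with a few lines of shell before picking the --memory values (the numbers follow the example above):

```shell
# Memory budget for a 32 GByte machine: data node plus MySQL Server;
# the remainder is left for the OS and other programs
TOTAL_GB=32
NDBD_GB=24
MYSQLD_GB=4
OS_GB=$((TOTAL_GB - NDBD_GB - MYSQLD_GB))
echo "ndbd=${NDBD_GB}G mysqld=${MYSQLD_GB}G os=${OS_GB}G"
```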
Using Docker to limit CPU resources#
In RonDB we have the ability to control on a detailed level which CPUs a certain thread uses. This is especially true for the RonDB data node. If we want to ensure that two programs run on different CPUs we can again use Docker to do exactly that.
We expand on the previous example and, assuming that we run on a machine with 8 CPU cores and 16 CPUs, we can start the containers as below.
docker run -d --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/mysql \
  --name=mysqld1 \
  --memory=4G \
  --memory-swap=4G \
  --memory-reservation=4G \
  --cpuset-cpus=0-3,8-11 \
  mronstro/rondb mysqld --ndb-cluster-connection-pool-nodeids=67

docker run -d --net=host \
  -v /path/my.cnf:/etc/my.cnf \
  -v /path/datadir:/var/lib/rondb \
  --name=ndbd1 \
  --memory=24G \
  --memory-swap=24G \
  --memory-reservation=24G \
  --cpuset-cpus=4-6,12-14 \
  mronstro/rondb ndbmtd --ndb-nodeid=1
Here we have dedicated 24 GByte of memory and 3 CPU cores to the NDB data node, and 4 GByte of memory and 4 CPU cores to the MySQL Server. This leaves 4 GByte of memory and 1 CPU core for the OS and other programs.
In this manner we can use Docker to get a controlled run-time environment. Thus Docker can be used both to get a controlled environment for installation and for execution. In both of these examples we run the RonDB programs directly on top of the host file system and the host networking. We use Docker to avoid having to worry about installing the RonDB programs, and to ensure that the MySQL Server and the RonDB data node can run concurrently on the same host without interfering with each other, other than sharing the same disks and network.
Using Docker and cgroups to control CPU and memory#
Instead of using Docker parameters it is possible to connect a Docker container to a cgroup. This means that all the resource constraints that are possible using cgroups are available for configuration.
Probably the most important point here is that this makes it possible to allocate CPU resources exclusively to a Docker container; cgroups are the only method of exclusively locking CPU resources in Linux.
The Docker parameter for this is --cgroup-parent=cgroup_name.
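A minimal sketch of this, assuming a host using the systemd cgroup driver where a slice named rondb.slice has already been created with its own CPU and memory limits (with the cgroupfs driver the parent is given as a path instead, e.g. /rondb):

```shell
# Attach the data node container to an existing cgroup;
# the container then runs under the limits of that cgroup
docker run -d --net=host \
  -v /path/datadir:/var/lib/rondb \
  --name=ndbd1 \
  --cgroup-parent=rondb.slice \
  mronstro/rondb ndbmtd --ndb-nodeid=1
```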