Basic configuration setup
When you look at the forums it is quite obvious that the vast majority of NDB users are looking for a very basic high availability setup with one VM per node in the cluster.
As an exercise, we will now set up the configuration for such a basic cluster. The previous chapters have given us a lot of input on which parameters matter the most. We want a memory-efficient setup that can handle some load, but not the very highest load. We are not aiming for the absolute highest performance here; it is enough to set things up for a user that wants a very basic HA setup.
The most basic HA setup has 5 VMs: one for the RonDB management server, two for the data nodes and two for the MySQL servers. In addition we will add one API node for use by RonDB tools.
We assume that the management server is using IP address 192.168.1.100, the data nodes are set up on 192.168.1.101 and 192.168.1.102 and the MySQL servers are using 192.168.1.103 and 192.168.1.104.
As preparation for setting up the cluster we assume that the VMs have opened port 1186 for communication to and from the NDB management server, port 3306 for communication to and from the MySQL servers and port 11860 for communication to and from the data nodes. This is particularly important to consider when the OS closes all ports by default.

When setting things up in the cloud it is very common that one can define which ports to open. In a cloud setup almost all VMs open port 22 for SSH access, but in addition we need ports 1186, 3306 and 11860 opened for these VMs.
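On VMs where the OS firewall is active, the ports above must be opened on each host. As a sketch, assuming a distribution that uses firewalld (adapt to ufw, iptables or your cloud provider's security groups as needed):

```shell
# Open the three cluster ports permanently and reload the firewall.
sudo firewall-cmd --permanent --add-port=1186/tcp    # NDB management server
sudo firewall-cmd --permanent --add-port=3306/tcp    # MySQL servers
sudo firewall-cmd --permanent --add-port=11860/tcp   # NDB data nodes
sudo firewall-cmd --reload
```

In a cloud deployment the equivalent is usually done once in a security group or firewall rule attached to all the cluster VMs rather than on each host.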
In /etc/my.cnf we store the configuration used to startup the MySQL server nodes. In this file we need to ensure that the MySQL server can use the NDB storage engine and we need to ensure that it can connect to the management server.
In the RonDB configuration file we need to create 8 nodes: the management server with node id 65, the two data nodes with node ids 1 and 2 and the two MySQL servers with node ids 67 and 68. In addition we provide an API node that is using node id 231. This node is not bound to any specific host. We will also add one inactive RonDB management server and one extra RonDB data node. These nodes cannot be started since they are configured with NodeActive=0. See the chapter on adding replicas for how to actually make use of those nodes.
The minimum amount of memory needed to execute data nodes using this configuration is around 8 GBytes. See the chapter on RonDB and Docker for how to create a smaller configuration using automated memory and automated thread configuration. This configuration will use all CPUs and all the memory in the VMs.
[ndbd default]
NoOfReplicas=3
DataDir=/usr/local/ndb_data
ServerPort=11860
HeartbeatIntervalDbDb=1000
ODirect=1
IndexStatAutoCreate=1
IndexStatAutoUpdate=1
StringMemory=4

[ndb_mgmd default]
DataDir=/usr/local/ndb_data

[ndb_mgmd]
NodeId=65
Hostname=192.168.1.100

[ndb_mgmd]
NodeId=66
Hostname=192.168.1.100
NodeActive=0

[ndbd]
NodeId=1
Hostname=192.168.1.101

[ndbd]
NodeId=2
Hostname=192.168.1.102

[ndbd]
NodeId=3
Hostname=192.168.1.102
NodeActive=0

[mysqld]
NodeId=67
Hostname=192.168.1.103

[mysqld]
NodeId=68
Hostname=192.168.1.104

[api]
NodeId=231
This configuration will allow us to create 20,300 table and index objects with up to 500,000 columns in the tables. This should be sufficient for most use cases.
Setting NoOfReplicas to 3 is required since we add the inactive nodes.

We need to set DataDir to control the placement of the NDB files.

Using ODirect is preferable on most file systems, so we set it here.
[mysqld]
ndbcluster
ndb-cluster-connection-pool-nodeids=67
ndb-connectstring=192.168.1.100

[ndbd]
connect-string=192.168.1.100

[ndb_mgmd]
connect-string=192.168.1.100
config-file=/usr/local/ndb_data/config.ini
nodeid=65

[ndb_mgm]
connect-string=192.168.1.100
Whether to set the node ids of the MySQL server, data nodes and management server in my.cnf is a matter of choice. Here I have done so, but this means that the file must be changed in each VM. Otherwise one can provide the node id in the startup command.
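When the node ids are instead given on the command line, the startup could look roughly like the sketch below. The exact binary locations and any extra flags (such as --initial on first start) depend on the installation; the MySQL servers pick up their node ids from the ndb-cluster-connection-pool-nodeids setting in my.cnf.

```shell
# On 192.168.1.100: start the RonDB management server with its node id.
ndb_mgmd --ndb-nodeid=65 --config-file=/usr/local/ndb_data/config.ini

# On 192.168.1.101 and 192.168.1.102 respectively: start the data nodes.
ndbmtd --ndb-nodeid=1 --ndb-connectstring=192.168.1.100
ndbmtd --ndb-nodeid=2 --ndb-connectstring=192.168.1.100

# From any VM: verify that all nodes have connected.
ndb_mgm -c 192.168.1.100 -e show
```

The final ndb_mgm command prints the cluster status; once both data nodes report as started and both MySQL servers have connected, the cluster is ready for use.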