# Local Quickstart
In this quickstart, we will run a RonDB cluster on a local machine. We will verify that it is running and run some MySQL queries against it.
## Requirements
The following requirements must be fulfilled to run this quickstart:

- Linux machine (tested on Ubuntu 20.04, 22.04 & Red Hat-compatible 8+)
- RonDB installed (22.10 recommended)
- 8-10 GiB of free memory (only 5 GiB if using a single data node)
- 3-4 CPU cores
## Create a Configuration File
We will first define our cluster configuration. We want a cluster with the following setup:

- 1 management server
- 2 data nodes (1 node group and replication factor 2)
- 1 MySQL server
The management server will be used to load the cluster configuration file. For a production cluster running on a single machine, neither multiple replicas nor multiple node groups make sense; in our case, we are simply demoing the usage of replicas. In a distributed cluster, we recommend using 2 or 3 replicas per node group. Node groups can be added later on to grow database storage capacity.
Since RonDB is an in-memory database, which is usually run with one data node (and only a data node) per host, we will have to be more careful with the memory and CPU configuration. By default, a data node will allocate all available memory and apply adaptive spinning to all available CPUs. To restrict the allocated resources, we can use the parameters `TotalMemoryConfig` and `NumCPUs`.
In accordance with the requirements mentioned earlier, we will therefore run a minimal cluster setup, which allocates the following resources per node type:

- Management server: 10 MiB memory, 0.1 CPUs
- Data node: 3-4 GiB memory, 1 CPU
- MySQL server: 1 GiB memory, 2 CPUs
For the sake of simplicity, we will use a single directory for all data files of all RonDB services. In production, this can be subject to many optimizations, especially when using local NVMe drives.
The cluster configuration file, i.e. our `config.ini`, is shown in the following. The notation for the management server is `ndb_mgmd`, for data nodes it is `ndbd`, and for MySQL servers it is `mysqld`. The appended "d" stands for daemon, i.e. a process that runs in the background.
```ini
[ndb_mgmd]
NodeId=65
Hostname=127.0.0.1
DataDir=/usr/local/rondb_data

[ndbd default]
NoOfReplicas=2
DataDir=/usr/local/rondb_data

# Restricting CPU usage
NumCPUs=1

# Minimising memory allocation
TotalMemoryConfig=3G
RedoBuffer=16M
ReplicationMemory=50M
SchemaMemory=200M
TransactionMemory=150M
SharedGlobalMemory=150M

[ndbd]
NodeId=1

[ndbd]
NodeId=2

[mysqld]
NodeId=67

[mysqld]
NodeId=68

[api]
NodeId=69
```
Note that the parameter `Hostname` is only used for the management server. For data nodes, the default is `localhost`, and for API nodes (RonDB clients) the default is empty. Empty hostnames for API nodes are generally considered an unsafe practice, since they allow anybody with network access to connect to the cluster. Data nodes and management servers will also use `Hostname` as the bind address of their server.
Each API slot of any type (including `mysqld` slots) specifies one connection to the cluster. Our configuration therefore allows our MySQL server to use two connections, which lets it scale to a higher number of CPUs.
## Start Cluster
First, we start the management server, which loads the cluster configuration file.
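A minimal invocation could look like the following sketch (it assumes `config.ini` is in the current working directory; `-f`, `--configdir` and `--initial` are standard `ndb_mgmd` flags):

```bash
# Load config.ini and store the binary configuration database under --configdir;
# --initial makes the server (re-)read config.ini rather than a cached binary config
ndb_mgmd -f config.ini \
    --configdir=/usr/local/rondb_data \
    --initial
```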
In contrast to the previously specified `DataDir`, the `--configdir` flag is used to specify the directory in which the management server will store its configuration database. This is essentially the `config.ini` file in a binary format. If other management client commands were used to alter the cluster configuration, these changes would also be reflected in this file.

If any configuration parameter is incorrect, the management server will fail to start. If this happens, check the node log file `/usr/local/rondb_data/ndb_65_out.log`.
Next, we'll be starting the data nodes. This works as follows:

```bash
ndbmtd --ndb-connectstring=127.0.0.1 --ndb-nodeid=1
ndbmtd --ndb-connectstring=127.0.0.1 --ndb-nodeid=2
```
Note that this should be done in parallel since a cluster will not start until at least a partial cluster can be run. This means that every node group must have at least one data node running and at least one node group must have all data nodes running.
Once again, the node log files `/usr/local/rondb_data/ndb_1_out.log` and `/usr/local/rondb_data/ndb_2_out.log` can be checked for errors. Since we are now already running cluster interactions, further errors can also be detected in the management server's cluster log: `/usr/local/rondb_data/ndb_65_cluster.log`.
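For instance, the cluster log can be followed while the data nodes are starting (any log viewer works; `tail -f` is just one option):

```bash
# Stream new cluster log entries as the data nodes join the cluster
tail -f /usr/local/rondb_data/ndb_65_cluster.log
```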
## Checking Cluster Status
Now, we technically already have a running cluster. This can be verified by running the `show` command via the management client. This will use the `api` slot which was specified in the `config.ini` file earlier to connect.
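For example (assuming the management server is reachable on the default port 1186):

```bash
# Ask the management server for the cluster topology and node status
ndb_mgm --ndb-connectstring=127.0.0.1:1186 -e "show"
```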
This will output something like the following:
```
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @127.0.0.1  (RonDB-22.10.1, Nodegroup: 0, *)
id=2    @127.0.0.1  (RonDB-22.10.1, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=65   @127.0.0.1  (RonDB-22.10.1)

[mysqld(API)]   3 node(s)
id=67 (not connected, accepting connect from any host)
id=68 (not connected, accepting connect from any host)
id=69 (not connected, accepting connect from any host)
```
As one can see, we have 2 data nodes and 1 management server running. However, we do not yet have a MySQL server connected. This is what we will do next.
## Starting MySQL Server
MySQL servers can be configured extensively using an ini configuration file. There are a handful of books and an extensive MySQL reference manual which can be consulted for more details. For the sake of brevity, we will only use a few CLI commands:
```bash
mysqld --ndbcluster \
    --ndb-connectstring=127.0.0.1:1186 \
    --datadir=/usr/local/rondb_data/mysql_data \
    --log-error=/usr/local/rondb_data/mysql-error.log \
    --socket=/usr/local/rondb_data/mysql.sock \
    --ndb-cluster-connection-pool=2 \
    --initialize-insecure
```
This will bootstrap the MySQL server as a foreground process and exit once finished. When this is done, we can start the MySQL server as a background process. The following assumes that we are running as the Unix `root` user.
```bash
mysqld --ndbcluster \
    --ndb-connectstring=127.0.0.1:1186 \
    --datadir=/usr/local/rondb_data/mysql_data \
    --socket=/usr/local/rondb_data/mysql.sock \
    --log-error=/usr/local/rondb_data/mysql-error.log \
    --ndb-cluster-connection-pool=2 \
    --user=root \
    --daemonize
```
This will start a MySQL server using the two connections to the cluster that we specified in the `config.ini` file. Both of these commands should take less than a minute to complete.

Error messages can be checked in the log file `/usr/local/rondb_data/mysql-error.log`.
## Checking MySQL Status
The created MySQL server is initialized without a password for the default MySQL user `root`. We can therefore connect to it using the MySQL client, for example with a command like the following:
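```bash
# A typical client invocation: connect as the passwordless root user
# via the Unix socket configured above
mysql -u root --socket=/usr/local/rondb_data/mysql.sock
```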
If everything is working, we will then be able to access the `ndbinfo` database, which provides information about the RonDB cluster:
```
mysql> USE ndbinfo;
mysql> SHOW TABLES;
mysql> SELECT * FROM cpustat;
+---------+--------+---------+-----------+---------+-------------+-----------------+-----------------+-------------+--------------------+--------------+
| node_id | thr_no | OS_user | OS_system | OS_idle | thread_exec | thread_sleeping | thread_spinning | thread_send | thread_buffer_full | elapsed_time |
+---------+--------+---------+-----------+---------+-------------+-----------------+-----------------+-------------+--------------------+--------------+
|       1 |      0 |       2 |         2 |      96 |           3 |              97 |               0 |           0 |                  0 |      1032096 |
|       2 |      0 |       2 |         2 |      96 |           3 |              97 |               0 |           0 |                  0 |      1006971 |
+---------+--------+---------+-----------+---------+-------------+-----------------+-----------------+-------------+--------------------+--------------+
2 rows in set (0.01 sec)
```
The MySQL server is now connected to the RonDB cluster and can access information about the data nodes. The cluster is now fully set up!
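As a final sanity check, we can create a table backed by RonDB and run a couple of queries against it (the database and table names below are purely illustrative):

```sql
CREATE DATABASE test_db;
USE test_db;

-- ENGINE=NDBCLUSTER stores the table in the RonDB data nodes
CREATE TABLE t1 (
    id INT PRIMARY KEY,
    value VARCHAR(255)
) ENGINE=NDBCLUSTER;

INSERT INTO t1 VALUES (1, 'hello rondb');
SELECT * FROM t1;
```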