Managing MySQL Servers#

Running multiple distributed MySQL servers requires a somewhat more complex setup. In general, however, it is similar to the Local Quickstart Tutorial.

Basic Configuration#

In the local quickstart, we only used CLI parameters to start the MySQL server. In a more advanced setup, however, it can make sense to place the MySQLd configuration into a separate file, usually called my.cnf. Here, we will use a minimal my.cnf file. The full documentation of the MySQLd configuration parameters is found in the MySQL Reference Manual.

As a reminder, we specified two MySQL servers in our config.ini file, for the hosts mysqld1_hostname and mysqld2_hostname, each with two node slots.

To make sure that our MySQLds use the node slots assigned to them, rather than API slots with empty hostnames, we specify the ndb-cluster-connection-pool-nodeids parameter in the my.cnf file. This, however, means that each host requires its own my.cnf file.

A basic my.cnf file for the MySQLd on mysqld1_hostname would look as follows:

[mysqld]
datadir         = /usr/local/rondb_data/mysql_data
port            = 3306
socket          = /usr/local/rondb_data/mysql.sock
bind-address    = 0.0.0.0

# Enable the NDB storage engine and make it the default
ndbcluster
default-storage-engine = ndbcluster

# Use two cluster connections, bound to the node slots
# assigned to this host in config.ini
ndb-cluster-connection-pool             = 2
ndb-cluster-connection-pool-nodeids     = 67,68

[mysql_cluster]
ndb-connectstring   = mgmd1_hostname,mgmd2_hostname

# This is used for the MySQL client
[client]
socket              = /usr/local/rondb_data/mysql.sock
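
The my.cnf for the MySQLd on mysqld2_hostname would be identical apart from the node IDs. As a sketch, assuming the second server was assigned node IDs 69 and 70 (the exact IDs depend on one's own config.ini):

# Differing lines in the my.cnf on mysqld2_hostname
# (node IDs 69,70 are an assumption; use the slots from config.ini)
ndb-cluster-connection-pool             = 2
ndb-cluster-connection-pool-nodeids     = 69,70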

Starting Procedure#

The procedure to start MySQL servers is identical to starting a MySQL server running InnoDB; one can therefore also refer to the official MySQL documentation.

Some of the following steps should only be run on one MySQLd, preferably the first one. The other steps must be run on every MySQLd; a sketch combining these per-host steps into a single script is shown after the list.

Apart from MySQL Views, Stored Procedures and Triggers, the MySQL server is essentially a stateless client of the data nodes. This means that removing its local data between restarts is acceptable.
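
If a MySQLd's local data directory is in a stale or broken state, it can therefore simply be removed before re-initialising; the path below matches the datadir from the my.cnf above:

# Stop the MySQLd first, then remove its local data directory
rm -rf /usr/local/rondb_data/mysql_data
# Afterwards, re-run the initialisation step (step 2 below)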

  1. On every MySQLd, always: Make sure the configuration is valid:

    mysqld  --defaults-file=<path/to/my.cnf> \
            --validate-config
    
  2. On every MySQLd, always: Initialise the data directory:

    mysqld  --defaults-file=<path/to/my.cnf> \
            --initialize-insecure
    
  3. On one MySQLd, once: Run a MySQL server which is only available locally via its socket:

    mysqld  --defaults-file=<path/to/my.cnf> \
            --skip-networking \
            --daemonize
    
    # Run this until the MySQL server is up
    mysqladmin --socket=/usr/local/rondb_data/mysql.sock ping
    
  4. On one MySQLd, once: Open the MySQL client via the local root user:

    mysql \
        -uroot \
        -hlocalhost \
        --protocol=socket \
        --socket=/usr/local/rondb_data/mysql.sock
    
  5. On one MySQLd, once: Set root user password, create other MySQL users, create user privileges, etc.

  6. On one MySQLd, once: Stop the MySQL server:

    mysqladmin \
        -uroot \
        --password=<new-root-password> \
        --socket=/usr/local/rondb_data/mysql.sock \
        shutdown 
    
  7. On every MySQLd, always: Start the MySQL server as a daemon:

    mysqld  --defaults-file=<path/to/my.cnf> \
            --daemonize
    
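As referenced above, the per-host steps (1, 2 and 7) can be combined into a small bootstrap script. This is a minimal sketch; the my.cnf location is an assumption:

#!/usr/bin/env bash
# Minimal bootstrap sketch for a single MySQLd. Assumes the one-time
# user setup (steps 3-6) has already been performed on the first MySQLd.
set -e

MY_CNF=/usr/local/rondb_data/my.cnf     # assumed location of this host's my.cnf
SOCKET=/usr/local/rondb_data/mysql.sock

# The MySQLd is stateless (see the note above), so any stale local
# data can be removed before re-initialising
rm -rf /usr/local/rondb_data/mysql_data

mysqld --defaults-file="$MY_CNF" --validate-config
mysqld --defaults-file="$MY_CNF" --initialize-insecure
mysqld --defaults-file="$MY_CNF" --daemonize

# Wait until the MySQL server answers on its socket
until mysqladmin --socket="$SOCKET" ping; do
    sleep 2
done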

An alternative to this procedure is to use the --initialize command-line parameter instead of --initialize-insecure. In this case, a random root password is generated, which has to be changed when connecting to the MySQL server for the first time.
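
With --initialize, the generated password is written to the MySQL error log, from where it can be retrieved before the first login. A sketch, assuming the error log ends up as a .err file in the data directory (its location depends on the log-error setting):

# Retrieve the generated root password from the error log
grep 'temporary password' /usr/local/rondb_data/mysql_data/*.err

# Connect with the generated password; the server will demand a new
# password (via ALTER USER ... IDENTIFIED BY ...) before anything else
mysql -uroot -p --socket=/usr/local/rondb_data/mysql.sock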

Views, Stored Procedures and Triggers#

RonDB supports Views, Stored Procedures and Triggers. However, these are not replicated across MySQL servers. One therefore has to make sure that they are created on every MySQL server on which one wishes to use them. All other schema objects, such as databases and tables, are replicated.
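
A minimal sketch of applying such definitions to every MySQL server, assuming they are collected in a file called views_and_procedures.sql (a hypothetical name):

# Run the same SQL file against every MySQLd
for host in mysqld1_hostname mysqld2_hostname; do
    mysql -uroot --password=<new-root-password> -h"$host" \
        < views_and_procedures.sql
done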

Resilient Starts#

Similar to the data nodes, one may want to implement a waiting mechanism for the MySQL servers' startup. Instead of only waiting for the management server to start up, one must now wait for the entire cluster to start up. This can be done using the ndb_waiter binary, which exits once the cluster has started:

ndb_waiter --ndb-connectstring=mgmd1_hostname,mgmd2_hostname
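
ndb_waiter gives up after a configurable --timeout (120 seconds by default). Assuming it then returns a non-zero exit code, a retry loop can be built around it:

# Retry until the cluster reports STARTED; --timeout is in seconds
until ndb_waiter --ndb-connectstring=mgmd1_hostname,mgmd2_hostname \
                 --timeout=30; do
    echo "Cluster not started yet; retrying..."
    sleep 5
done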

A more granular method is to use the ndb_mgm binary to check the cluster status.
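
For instance, the management client can execute status commands non-interactively:

# Show the state of all data nodes (including start phases)
ndb_mgm -c mgmd1_hostname,mgmd2_hostname -e "all status"

# Show the whole cluster topology, including connected MySQLds
ndb_mgm -c mgmd1_hostname,mgmd2_hostname -e "show"

A script can then inspect this output to wait for individual nodes rather than the entire cluster.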