
Probing the Cluster

Probing a cluster, and therefore detecting a cluster failure, can be complex due to partial failures. Since RonDB consists of multiple components, failures can be divided into different levels of increasing severity:

  1. Management servers (MGMds) are down

    The management server is a lightweight process that does not persist any state except for the cluster configuration and the cluster logs. Therefore, spinning it up again is very fast. However, whilst the MGMds are down, it is not possible to check whether the data nodes are working correctly (or at least which ones are).

  2. Query servers (e.g. MySQL servers) are down

    Query servers are also largely stateless, so spinning them up again usually takes around 5-30s; this can increase significantly with many tables. Unavailable query servers make the cluster unusable for applications, but they do not constitute a cluster failure. Fortunately, it is possible to run many independent query servers at the same time.

  3. All data nodes of one node group are down

    If this happens, the whole cluster goes down, since the remaining node groups no longer hold all of the data. A data node persists its data to disk via the REDO log. As long as this log survives the data node restart, the data node can start up again and recover its data. Spinning up a data node takes considerably more time than spinning up query servers, since it needs to load its data back into memory.

  4. Data nodes are out of memory or disk space

    Generally, a data node only uses the memory that is allocated to it at startup. Therefore, the process should only ever be killed by the OS due to OOM during data node startup. However, this allocated memory may at some point no longer be enough; the same applies to disk space. If this happens, the database becomes read-only. The cluster is therefore not down, but if scaling has not been planned for, manually scaling the cluster (allocating VMs, etc.) may take significant time, which in turn causes application downtime.
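
    For disk-based tables, free space can be probed from any MySQL server via the standard INFORMATION_SCHEMA.FILES table. A rough sketch (the tablespaces and data files listed depend on your schema):

    SELECT FILE_NAME,
           FREE_EXTENTS * EXTENT_SIZE / 1024 / 1024 AS free_mb,
           TOTAL_EXTENTS * EXTENT_SIZE / 1024 / 1024 AS total_mb
        FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE';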

  5. Data is missing or inconsistent

    This is perhaps the trickiest scenario. It is also difficult to test for; if it occurs, it is a bug in RonDB. If you run into this scenario, it is recommended to create a backup and restore it on a new cluster, as sketched below.
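
    A rough sketch of this, using the management client to take the backup and the ndb_restore tool to restore it on the new cluster (node id, backup id and paths below are placeholders to adjust):

    # Take a backup on the old cluster and wait for it to finish
    ndb_mgm -e "START BACKUP WAIT COMPLETED"

    # On the new cluster: restore the metadata once, then the data from each
    # original data node's backup files
    ndb_restore --nodeid=1 --backupid=1 --restore-meta --backup-path=/path/to/BACKUP/BACKUP-1
    ndb_restore --nodeid=1 --backupid=1 --restore-data --backup-path=/path/to/BACKUP/BACKUP-1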

There are two basic methods to check if a cluster is operational: via the MySQL server or the RonDB management server (MGMd).

Probing order

Generally, we want to be aware of partial failures across all of RonDB’s components. This can be done most effectively by running probes in the following order:

  1. Insert data via the MySQL Servers

    This is most likely done at the application level. If the data nodes have run out of REDO log, writes will fail with error code 410; this can therefore be detected with an INSERT or UPDATE query.
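
    A minimal sketch of such a probe, assuming a small helper table created just for this purpose (here called heartbeat; any writable RonDB table works):

    -- One-time setup of the probe table
    CREATE TABLE IF NOT EXISTS heartbeat (id INT PRIMARY KEY, ts BIGINT) ENGINE=NDBCLUSTER;

    -- Run periodically; fails with error 410 if the REDO log is exhausted
    REPLACE INTO heartbeat VALUES (1, UNIX_TIMESTAMP());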

  2. Query MySQL servers for available data memory

    This can be done using the following query:

    SELECT node_id, resource_name, used, max
        FROM ndbinfo.resources WHERE resource_name = 'DATA_MEMORY';
    

    This will, first of all, check whether the MySQL server is queryable at all; if not, one can try another MySQL server. The query returns the used and maximum/total data memory for each data node. Note that the unit is pages, where one page is 32768 bytes. If the used memory is close to the maximum memory, the given data node is running out of data memory.
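
    To make the numbers easier to act on, the page counts can be converted to megabytes and a fill percentage directly in the query, for example:

    SELECT node_id,
           used * 32768 / 1024 / 1024 AS used_mb,
           max * 32768 / 1024 / 1024 AS max_mb,
           ROUND(100 * used / max, 1) AS used_pct
        FROM ndbinfo.resources WHERE resource_name = 'DATA_MEMORY';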

  3. Query MySQL server process status

    This can be done using the mysqladmin binary:

    mysqladmin -hlocalhost ping
    

    This will return exit code 0 as long as the MySQL server is running, even if no user or password is supplied (in which case the server replies with an Access denied error).
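
    Since only the exit code matters, a probe script can use it directly, for example:

    if mysqladmin -hlocalhost ping > /dev/null 2>&1; then
        echo "MySQL server process is up"
    else
        echo "MySQL server process is down"
    fi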

  4. Check cluster status via RonDB management client

    The SHOW command in the RonDB management client reports cluster status as seen by the contacted management server.

    ndb_mgm> SHOW
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=1 @192.168.1.9 (RonDB-22.10.5, Nodegroup: 0, *)
    id=2 (not connected, accepting connect from 192.168.1.10)
    [ndb_mgmd(MGM)] 1 node(s)
    id=65 @192.168.1.8 (RonDB-22.10.5)
    id=66 @192.168.1.9 (RonDB-22.10.5)
    [mysqld(API)] 2 node(s)
    id=67 @192.168.1.9 (RonDB-22.10.5)
    id=68 @192.168.1.10 (RonDB-22.10.5)
    id=69 (not connected, accepting connect from 192.168.1.9)
    id=70 (not connected, accepting connect from 192.168.1.10)
    

    At least one data node (ndbmtd) per node group should be listed as connected for the cluster to be considered up. Otherwise, it’s down.

    As described earlier, the MGMd could be down as well. In this case, it is best to keep retrying (using the same hostname) until the MGMd is up again.
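
    For scripted probing, the same command can be run non-interactively by passing it to the management client, e.g. (replace the connect string with the hostname of one of your MGMds):

    ndb_mgm --ndb-connectstring=mgmd_hostname:1186 -e SHOW

    The output can then be parsed for data nodes listed as not connected.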