
Up-/Downgrading RonDB#

In this chapter we will describe how to upgrade a RonDB standalone installation. We will also describe what you need to do to upgrade a Hopsworks installation.

First we need to distinguish between upgrades to a new version within the RonDB 21.04 series or within the 22.10 series. These should be uncomplicated, since such releases mostly fix bugs found in earlier RonDB versions.

Upgrading from one major RonDB version to another can involve additional requirements on the software change procedure; these are described separately.

General approach to Upgrading RonDB#

In this chapter we will walk through the steps required to upgrade the RonDB cluster from one RonDB version to another. This assumes that all nodes in the RonDB cluster are upgraded.

In many cases an upgrade is only made to avoid a specific bug, and then it might be enough to upgrade a part of the RonDB cluster. For example, an upgrade from RonDB 21.04.6 only fixes bugs present in the RonDB management server and in the RonDB data nodes, so there is no immediate need to upgrade the API nodes and MySQL Servers in that specific case.

Upgrade RonDB management server(s)#

When upgrading RonDB one should always follow the same pattern of node changes: first upgrade the RonDB management server(s), followed by the data nodes, and finally the various API nodes (including the MySQL Servers).

Communication between RonDB data nodes and API nodes has always followed a scheme where one has to check what version the receiver is using before using a new feature. This means that one can expect upgrades to work independently of the order in which RonDB data nodes, API nodes and MySQL Servers are upgraded.

When performing an upgrade within the RonDB 21.04 series, more or less any order will usually work, even starting the RonDB management server last of all. The reason is that there have been no major changes to the configuration within the RonDB 21.04 series.

However, the order of management servers first, then data nodes and finally API nodes is still the recommended one.

One should upgrade one management server at a time. Upgrading one management server involves three steps: first stop the management server, next ensure that the new RonDB binaries are used, and finally start the management server with the new software version.
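As a minimal sketch, assuming the systemd service setup used by Hopsworks (described later in this chapter) and binaries reached through a symlink, the three steps could look as follows. The path and version in the symlink are examples only.

sudo service ndb_mgmd stop
# point the symlink at the new RonDB version
sudo ln -sfn /srv/hops/rondb-21.04.9-linux-glibc2.17-x86_64 /srv/hops/mysql
sudo service ndb_mgmd start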

Upgrade RonDB Data nodes#

The next step is to upgrade the RonDB data nodes. We want the upgrade to be an online operation, thus we cannot bring down all RonDB data nodes at once. There are two approaches to this upgrade. The first is to upgrade one RonDB data node at a time; this is the safest approach, but in a large cluster it can consume too much time.

A faster approach is to upgrade one RonDB data node per node group. The show command in the management client shows which node group each data node belongs to. If the cluster has only one node group this approach is the same as restarting one node at a time.
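For example, the show command can be issued directly from the shell through the management client; the Nodegroup field in its output tells which node group each data node belongs to:

ndb_mgm -e show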

Upgrading a RonDB data node goes through the same three steps: first stop the node, next update the software links to point to the new software version, and finally start the node using the new software version.
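A sketch of these steps for a data node, assuming node id 3 and the symlinked binary layout from the previous example; the node is stopped through the management client and restarted using the new ndbmtd binary (the host name mgmd-host is a placeholder):

ndb_mgm -e "3 stop"
# point the symlink at the new RonDB version
sudo ln -sfn /srv/hops/rondb-21.04.9-linux-glibc2.17-x86_64 /srv/hops/mysql
# start the data node with the new binaries
/srv/hops/mysql/bin/ndbmtd --ndb-connectstring=mgmd-host:1186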

Upgrade RonDB MySQL Servers#

The next step is to upgrade the MySQL Servers in the RonDB cluster. It is important to upgrade all of them. If a table is created by one of the MySQL Servers using the newer RonDB version, it isn't possible for older RonDB versions to use that table, since the newer version could potentially be using a new distribution format for tables. Thus it isn't fully safe to use tables created by a newer version.

It is advisable to avoid creating tables during the upgrade if possible, since doing so could lead to tables being temporarily unavailable on MySQL Servers that have not yet been upgraded.

MySQL Servers can be upgraded one at a time since it is a fairly quick change. The normal three-step upgrade procedure applies to MySQL Servers as well: first stop, next upgrade the software, and finally start the new MySQL Server. Since no persistent data is stored in the MySQL Servers when using the NDB storage engine, it is also safe to bootstrap the MySQL Servers using the new version to avoid any issues with upgrades.
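A sketch, again assuming the Hopsworks service name mysqld and symlinked binaries; the last command merely verifies that the new version is running (the credentials are placeholders):

sudo service mysqld stop
sudo ln -sfn /srv/hops/rondb-21.04.9-linux-glibc2.17-x86_64 /srv/hops/mysql
sudo service mysqld start
mysql -u root -p -e "SELECT VERSION();"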

To secure an online software upgrade it is important that the MySQL Servers form a scalable group. In Hopsworks this is accomplished by using Consul: all scalable MySQL Servers are accessible through the virtual address onlinefs.mysql.service.consul. Any connections currently using a MySQL Server that is being upgraded will see it go away, but when reconnecting, Consul ensures that they end up on a MySQL Server that is alive.
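For example, a client connects through the virtual address just as it would to a single MySQL Server (the user name is a placeholder):

mysql -h onlinefs.mysql.service.consul -u someuser -p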

Thus the Consul group ensures that accessing RonDB through MySQL remains possible throughout the upgrade of the MySQL Servers (and also throughout all other upgrade steps).

Another possible way to handle this is to use a MySQL proxy server; obviously this creates a new issue when the proxy server itself needs to be upgraded.

Upgrade ClusterJ applications#

Upgrades of ClusterJ applications are handled by the application, thus the exact manner of upgrading them is defined by the application. It is fully possible to avoid upgrading a ClusterJ application if the new RonDB release has no changes to the ClusterJ parts.

The three steps change a bit when upgrading ClusterJ applications. The stop and the later start are the same, but between the stop and the start the JAR file of the ClusterJ application must be replaced with a JAR file compiled against the new ClusterJ version.
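A sketch of this procedure, where the service name and paths are entirely hypothetical and depend on how the application is deployed:

sudo service my-clusterj-app stop
# replace the application JAR with one compiled against the new ClusterJ version
cp my-app-21.04.9.jar /srv/my-app/my-app.jar
sudo service my-clusterj-app start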

Upgrade NDB API applications#

The same reasoning applies to NDB API applications as to ClusterJ applications, except that in this case the build is a C++ build against the libraries of the NDB API.
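For example, a rebuild could look something like the following, assuming the application links dynamically against the NDB API client library shipped with the RonDB binaries; the file names, include paths and library paths are illustrative and depend on the installation:

g++ my_ndb_app.cpp -o my_ndb_app \
    -I/srv/hops/mysql/include/storage/ndb \
    -I/srv/hops/mysql/include/storage/ndb/ndbapi \
    -L/srv/hops/mysql/lib -lndbclient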

Upgrading from RonDB 21.04 to 22.10#

When performing an upgrade from RonDB 21.04 to RonDB 22.10 it is a strict rule that one needs to upgrade the RonDB management server before the other nodes. There are substantial changes to the configuration in RonDB 22.10, thus it is important that one doesn’t attempt to start any RonDB 22.10 data nodes, API nodes or MySQL Servers using a RonDB 21.04 management server.

RonDB 22.10 uses a different on-disk format compared to RonDB 21.04. This is because RonDB 22.10 supports variable-sized columns on disk. Previously a VARCHAR column was stored on disk as a fixed-size column; in RonDB 22.10 a VARCHAR is stored as a variable-sized column on disk. This means that the format of on-disk tables in RonDB 22.10 differs from the format in RonDB 21.04.

This leads to the requirement that the data nodes must be started with an initial node restart. The upgrade is still an online operation, but since the data on disk must be rebuilt from scratch it will take longer than a normal node restart, which can read most of the data from disk. An initial node restart instead gets all its data shipped from another node in the same node group.

The initial node restart happens when we start ndbmtd with the --initial flag.
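For example, assuming data node 3 is being restarted and the management server runs on mgmd-host (a placeholder) with the default port 1186:

ndb_mgm -e "3 stop"
ndbmtd --initial --ndb-connectstring=mgmd-host:1186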

If the RonDB cluster has no tables created using tablespaces, this requirement does not apply.

Upgrading RonDB inside Hopsworks#

Upgrading Hopsworks is documented in the documentation of Hopsworks. This section describes how to upgrade the RonDB parts without upgrading Hopsworks. This upgrade can be done as an online operation while Hopsworks is running.

Upgrading RonDB requires changing the following components: the RonDB management server, the RonDB data nodes, the scalable MySQL service, the MySQL Server in the head node(s) and also the name nodes and data nodes of HopsFS. In newer versions of Hopsworks there is also an OnlineFS component that inserts data into RonDB from Kafka topics.

Upgrading RonDB components#

Upgrading the RonDB components follows the procedure described above. The only special thing is how Hopsworks starts and stops those components and how you replace the RonDB binaries.

Replacing RonDB binaries#

The RonDB binaries are reached through the /srv/hops/mysql directory, which is a symlink. The new RonDB binaries can be downloaded from repo.hops.works as described in the chapter on installing RonDB. The placement of the RonDB binaries is the same for management servers, data nodes and MySQL Servers.

So replacing the binaries simply means updating this symlink.

sudo su - mysql
cd /srv/hops
# download and unpack the new RonDB binaries
wget https://repo.hops.works/master/rondb-21.04.9-linux-glibc2.17-x86_64.tar.gz
tar xfz rondb-21.04.9-linux-glibc2.17-x86_64.tar.gz
rm rondb-21.04.9-linux-glibc2.17-x86_64.tar.gz
# overwrite the existing mysql symlink so it points at the new version
ln -sfn rondb-21.04.9-linux-glibc2.17-x86_64 mysql

After ensuring that the upgrade works properly it is a good idea to remove the old installation, which is also found in the /srv/hops directory.

Overwriting the symlink won't affect the execution of the running processes when you are using Linux. Other operating systems work differently, but Hopsworks is mainly intended for use on Linux.

Start and Stop of RonDB components#

Hopsworks uses systemctl scripts to start and stop services. For example, to start and stop the RonDB management server use the following commands (assuming that you have logged into the machine where the service runs and have sudo privileges):

sudo service ndb_mgmd start
sudo service ndb_mgmd stop

The name of the RonDB data node service is ndbmtd and the MySQL Server service is called mysqld.

So first replace the RonDB binaries, then stop the service and then start it again; it will start up using the new binaries.
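Putting this together for e.g. a RonDB data node, the whole sequence on one machine becomes:

# replace the binaries first; the running process is unaffected (see above)
sudo su - mysql -c "ln -sfn rondb-21.04.9-linux-glibc2.17-x86_64 /srv/hops/mysql"
# then restart the service so it picks up the new binaries
sudo service ndbmtd stop
sudo service ndbmtd start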

Upgrading HopsFS#

First, it is important to understand that an online upgrade of the RonDB components of HopsFS is possible, but it requires an HA installation of HopsFS, which means at least two HopsFS namenodes and datanodes. Stopping the namenode in a setup with only one namenode will fail operations and could lead to lost events in Hopsworks; in such a setup it is better to use the upgrade mechanisms of Hopsworks. If at least two namenodes are running, the HopsFS client ensures that failed operations are retried on another namenode while one of them is temporarily stopped for an upgrade.

When upgrading the RonDB components of HopsFS we need to replace the JAR file providing the DAL (Data Access Layer). The first step is to download this JAR file into the /srv/hops/ndb-hops directory. This directory is actually a symlink as well, but we are not replacing all parts of HopsFS, only the RonDB components.

One important note here is that the new JAR file must have the same version number as the Hopsworks distribution; in the example below we use the Hopsworks distribution 3.2.0.7. It is only the RonDB version that should be upgraded. Upgraded versions of the DAL JAR file are only released when required: if a new RonDB version has no changes affecting the DAL, one uses the latest RonDB version available for that specific DAL version. Thus in this case the 21.04.7 version is the newest, since 21.04.8 has no changes that affect the ClusterJ components in the DAL.

sudo su - hdfs
cd /srv/hops/ndb-hops
wget https://repo.hops.works/master/ndb-dal-3.2.0.7-RC0-21.04.7.jar

The next step is to stop the namenode service.

sudo service namenode stop

Now it is time to replace the JAR files used by HopsFS. The JAR file is located in two places; both are symlinks to the downloaded JAR file and both are named ndb-dal.jar.

sudo su - hdfs
cd /srv/hops/ndb-hops
# point the ndb-dal.jar symlink at the new DAL JAR file
ln -sfn ndb-dal-3.2.0.7-RC0-21.04.7.jar ndb-dal.jar
cd /srv/hops/hadoop/share/hadoop/common/lib
ln -sfn /srv/hops/ndb-hops/ndb-dal-3.2.0.7-RC0-21.04.7.jar ndb-dal.jar

Now we are ready to restart the HopsFS namenode using the upgraded RonDB component.

sudo service namenode start

We have now completed the upgrade of one namenode. Execute the same steps for all other namenodes and the upgrade is complete.

Downgrading RonDB#

A downgrade follows the same procedure as an upgrade and the required order is the same. Downgrading from RonDB 22.10 to RonDB 21.04 will likewise require using the --initial flag when restarting the data nodes. In addition, if any table has been created since the upgrade to RonDB 22.10, it is no longer possible to downgrade to RonDB 21.04; in that case backup and restore is the only method to get back to RonDB 21.04.

The order of downgrading nodes when downgrading inside 21.04 or 22.10 is the same as for upgrades.