Release Notes RonDB 21.04.16#
RonDB 21.04.16 is the sixteenth release of RonDB 21.04.
RonDB 21.04 is based on MySQL NDB Cluster 8.0.23. RonDB 21.04.16 is a bug fix release based on RonDB 21.04.15.
RonDB 21.04.16 is released as open source software with binary tarballs for use on Linux. It is developed on Linux and Mac OS X, and automated testing also uses WSL 2 on Windows (Linux on Windows).
RonDB 21.04.16 can be used with both x86_64 and ARM64 architectures although ARM64 is still in beta state.
RonDB 21.04.16 is tested and verified on both x86_64 and ARM platforms using both Linux and Mac OS X. It is, however, only released with a binary tarball for x86_64 on Linux.
In the git log there are also many commits relating to the REST API server. While it was under heavy development we did not mention changes related to it in the release notes. As of RonDB 21.04.14 the REST API is at a quality level where we have released it for production usage.
Build fixes are also not listed in the release notes, but can be found in the git log.
This is the last RonDB 21.04 release to be used in new Hopsworks releases; there might still be bug fix releases to support existing Hopsworks releases. The main focus for new Hopsworks releases has now moved to the RonDB 22.10 release series, which is used in Hopsworks from version 3.7 onwards. Older Hopsworks versions are still maintained and could get new RonDB 21.04 versions if needed.
Description of RonDB#
RonDB is designed to be used in a managed environment where the user only needs to specify the type of the virtual machine used by the various node types. RonDB has the features required to build a fully automated managed RonDB solution.
It is designed for applications requiring the combination of low latency, high availability, high throughput and scalable storage.
You can use RonDB in a serverless version on app.hopsworks.ai. In this case Hopsworks manages the RonDB cluster and you can use it for your machine learning applications. You can use this version for free with certain quotas on the number of Feature Groups (tables) and memory usage. Getting started with this is a matter of a few minutes, since the setup and configuration of the database cluster is already taken care of by Hopsworks.
You can also use the managed version of RonDB available on hopsworks.ai. This sets up a RonDB cluster in your own AWS, Azure or GCP account using the Hopsworks managed software. It creates a RonDB cluster given a few details about the HW resources to use. These details can be provided either through a web-based UI or using Terraform. The RonDB cluster is integrated with Hopsworks and can be used both for RonDB applications and for Hopsworks applications.
You can use the cloud scripts to easily set up a cluster on AWS, Azure or GCP. This requires no previous knowledge of RonDB; the script only needs a description of the HW resources to use, and the rest of the setup is automated.
You can use the open source version and build and set it up yourself. This is the command you can use to download a binary tarball:
# Download x86_64 on Linux
wget https://repo.hops.works/master/rondb-21.04.16-linux-glibc2.17-x86_64.tar.gz
RonDB 21.04 is a Long Term Support version that will be maintained until at least 2024.
Maintaining 21.04 mainly means fixing critical bugs and minor change requests. It doesn’t involve merging with any future release of MySQL NDB Cluster; this will be handled in newer RonDB releases.
Backports of critical bug fixes from MySQL NDB Cluster will happen when deemed necessary.
Summary of changes in RonDB 21.04.16#
RonDB 21.04.16 has 10 bug fixes since RonDB 21.04.15 and 0 new features. In total RonDB 21.04 contains 35 new features on top of MySQL Cluster 8.0.23 and a total of 158 bug fixes.
Test environment#
RonDB uses four different ways of testing. MTR is a functional test framework built using SQL statements to test RonDB. The Autotest framework is specifically designed to test RonDB using the NDB API. The Autotest is mainly focused on testing high availability features and performs thousands of restarts using error injection as part of a full test suite run. Benchmark testing ensures that we maintain the throughput and latency that is unique to RonDB. Finally we also test RonDB in the Hopsworks environment where we perform both normal actions as well as many actions to manage the RonDB clusters.
RonDB has a number of unit tests that are executed as part of the build process to ensure the quality of RonDB.
In addition, RonDB is tested as part of testing Hopsworks.
MTR testing#
RonDB has a functional test suite using the MTR (MySQL Test Run) that executes more than 500 RonDB specific test programs. In addition there are thousands of test cases for the MySQL functionality. MTR is executed on both Mac OS X and Linux.
We also have a special mode of MTR testing where we can run with different versions of RonDB in the same cluster to verify our support of online software upgrade.
Autotest#
RonDB is highly focused on high availability. This is tested using a test infrastructure we call Autotest. It contains many hundreds of test variants; executing the full set takes around 36 hours. One test run with Autotest uses a specific configuration of RonDB. We execute multiple such configurations, varying the number of data nodes, the replication factor and the thread and memory setup.
An important part of this testing framework is that it uses error injection. This means that we can test exactly what will happen if we crash in very specific situations, if we run out of memory at specific points in the code and various ways of changing the timing by inserting small sleeps in critical paths of the code.
During one full test run of Autotest RonDB nodes are restarted thousands of times in all sorts of critical situations.
Autotest currently runs on Linux with a large variety of CPUs, Linux distributions and even on Windows using WSL 2 with Ubuntu.
Benchmark testing#
We test RonDB using the Sysbench test suite, DBT2 (an open source variant of TPC-C), flexAsynch (an internal key-value benchmark), DBT3 (an open source variant of TPC-H) and finally YCSB (Yahoo Cloud Serving Benchmark).
The focus is on testing RonDB’s LATS capabilities (low Latency, high Availability, high Throughput and scalable Storage).
Hopsworks testing#
Finally we also execute tests in Hopsworks to ensure that it works with HopsFS, the distributed file system built on top of RonDB, and HSFS, the Feature Store designed on top of RonDB, and together with all other use cases of RonDB in the Hopsworks framework.
These tests include both functional tests of the Hopsworks framework as well as load testing of HopsFS and Hopsworks.
New features#
No new features were introduced in RonDB 21.04.16.
Bug fixes#
RONDB-537: Bugfix in setNeighbourNode#
In the setNeighbourNode() function, when removing a new neighbour transporter from the non-neighbour transporter list, the prev_trp_id was not updated correctly.
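The bug class can be illustrated with a minimal sketch (hypothetical names, not RonDB source): when unlinking a node from a doubly linked list, the successor’s back-pointer must be updated along with the predecessor’s forward pointer.

```python
# Minimal doubly-linked-list removal sketch (hypothetical names, not
# RonDB source). Forgetting to update the successor's prev pointer is
# the kind of bookkeeping error fixed in RONDB-537.
class Node:
    def __init__(self, trp_id):
        self.trp_id = trp_id
        self.prev = None
        self.next = None

def remove(head, node):
    """Unlink node from the list, returning the (possibly new) head."""
    if node.prev is not None:
        node.prev.next = node.next
    else:
        head = node.next
    if node.next is not None:
        node.next.prev = node.prev  # the back-pointer update that must not be missed
    node.prev = node.next = None
    return head
```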
RONDB-539: REST API Server. Update dependencies for security fixes#
RONDB-541: REST API Server. Added test for reading default values#
RONDB-496: REST API Server. Added support for BLOB / TEXT#
RONDB-549: Print tuple to log on data error in ndb_restore#
RONDB-475: Defensive fix of already fixed bug#
RONDB-428: Add --allow-unique-indexes to tests as necessary#
RONDB-552: Make OS overhead configurable#
Two new configuration parameters were introduced: OsStaticOverhead and OsCpuOverhead.
These parameters are used to calculate the OS overhead in the automatic memory calculation. When using AutomaticMemoryConfig there are two modes. In the first mode the user sets TotalMemoryConfig, deciding how much memory should be used in total; in this case that value is used and the overhead parameters are ignored.

If the user has set TotalMemoryConfig to 0, or simply not set it, the calculation is instead based on the amount of memory in the computer/VM/container the data node runs in. The default settings in this case are intended for a setup in a VM where the data node can use the entire VM except for some small processes used to monitor and change the cluster.

However, in some cases the user might also need to run heavier processes in their VM; in this case it is possible to increase OsStaticOverhead, OsCpuOverhead, or both.
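As an illustration, the parameters could be set in the data node defaults of the cluster configuration file (the section placement and values here are examples for a VM running extra processes, not recommendations):

```ini
[ndbd default]
AutomaticMemoryConfig=1
# Leave TotalMemoryConfig unset (or 0) so the overhead
# parameters below are taken into account.
OsStaticOverhead=2000M
OsCpuOverhead=150M
```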
OsCpuOverhead is multiplied by the number of CPUs.
It is necessary to leave memory both for the OS itself and for OS kernel buffers and for any applications that need to coexist with RonDB.
The default setting on OsStaticOverhead is 1400M and the default setting of OsCpuOverhead is 100M. In addition we will always avoid using 1% of the memory size.
Thus by default a node with 16 VCPUs and 128GB of memory will have a computed OS overhead of 1280M + 1400M + 16 * 100M = 4280M. Thus almost 124GB of memory is available for the data node in this case.
OsStaticOverhead cannot be decreased below 400M and OsCpuOverhead cannot be decreased below 50M.
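The arithmetic above can be sketched as follows (an illustration based on these notes, not the RonDB implementation):

```python
# Sketch of the OS overhead calculation described above: 1% of total
# memory, plus OsStaticOverhead, plus OsCpuOverhead per CPU, with the
# documented minimums of 400M and 50M applied.
def os_overhead_mb(total_memory_mb, num_cpus,
                   os_static_overhead_mb=1400, os_cpu_overhead_mb=100):
    static_mb = max(os_static_overhead_mb, 400)
    per_cpu_mb = max(os_cpu_overhead_mb, 50)
    return total_memory_mb // 100 + static_mb + num_cpus * per_cpu_mb

# The example from the notes: 16 VCPUs and 128GB (128000M) of memory.
print(os_overhead_mb(128_000, 16))  # 1280 + 1400 + 1600 = 4280
```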
RONDB-555: Fix check for existing unique index in ndb_restore#
RONDB-430 attempted to check whether any unique indexes exist in the database that could cause trouble during restore and, if so, warn or fail depending on whether --allow-unique-indexes is set. However, instead of checking the database, it warned/failed on every index present in the backup.

This patch fixes the check so that it actually checks the database.
RONDB-594: Fix calculation of number of fragment replica records in DBDIH#
This bug meant that only 10k table objects could be stored in RonDB 21.04 in the default setup, although it was configured to handle 20k table objects.