
Release Notes RonDB 22.10.0#

RonDB 22.10.0 is based on MySQL NDB Cluster 8.0.31, RonDB 21.04.9 and RonDB 22.01.2. In addition, the following patches from RonDB 21.04.10 are included: RONDB-195, RONDB-199 and RONDB-200.

RonDB 21.04 is a Long-Term Support version of RonDB that will be supported at least until 2024.

RonDB 22.10 is a new Long-Term Support version and will be maintained at least until 2025.

RonDB 22.10.0 is released as open source software with binary tarballs for use on Linux and Mac OS X. It is developed on Linux and Mac OS X, and on Windows using WSL 2 (Linux on Windows). The only supported platform is currently Linux/x86; the others are currently for development and testing. We plan to also support Linux/ARM soon. Mac OS X is a development platform and will continue to be so.

Description of RonDB#

RonDB is designed to be used in a managed cloud environment where the user only needs to specify the type of the virtual machine used by the various node types. RonDB has the features required to build a fully automated managed RonDB solution.

It is designed for applications requiring the combination of low latency, high availability, high throughput and scalable storage (LATS).

You can use RonDB in a Serverless version on app.hopsworks.ai. In this case Hopsworks manages the RonDB cluster and you can use it for your machine learning applications. You can use this version for free, with quotas on the number of Feature Groups (tables) you are allowed to add and on memory usage. You can get started in a minute: there is no need to set up a database cluster or worry about its configuration, it is all taken care of.

You can use the managed version of RonDB available on hopsworks.ai. This sets up a RonDB cluster in your own AWS, Azure or GCP account using the Hopsworks managed software, given a few details on the HW resources to use. These details can be provided either through a web-based UI or using Terraform. The RonDB cluster is integrated with Hopsworks and can be used both for RonDB applications and for Hopsworks applications.

You can use the cloud scripts, which enable you to set up a cluster on AWS, Azure or GCP in an easy manner. This requires no previous knowledge of RonDB; the script only needs a description of the HW resources to use, and the rest of the setup is automated.

You can use the open source version, download the binary tarball and set it up yourself.

You can also use the open source version, build it from source and set it up yourself.

This is the command you can use to retrieve the binary tarball:

# Download x86_64 on Linux
wget https://repo.hops.works/master/rondb-22.10.0-linux-glibc2.17-x86_64.tar.gz
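
Once downloaded, the tarball can be unpacked and the binaries added to the PATH. A minimal sketch, assuming the archive unpacks into a directory matching its name and using /opt as an example installation directory:

# Unpack the tarball and make the RonDB binaries available on the PATH
# (the target directory /opt is only an example)
tar xzf rondb-22.10.0-linux-glibc2.17-x86_64.tar.gz -C /opt
export PATH=/opt/rondb-22.10.0-linux-glibc2.17-x86_64/bin:$PATH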

Summary of changes in RonDB 22.10.0#

RonDB 22.10.0 is based on MySQL NDB Cluster 8.0.31 and RonDB 21.04.10 as described above. The features of RonDB 22.10.0 have either first been added in RonDB 22.01 or are released now in RonDB 22.10.

RonDB 21.04.10 added 31 new features on top of NDB, RonDB 22.01 added an additional 6 features, and RonDB 22.10 adds 2 more features.

RonDB 22.10 thus adds 39 new features on top of MySQL NDB Cluster 8.0.31. On top of this it fixes 122 bugs. We have reported more than 30 bugs to Oracle, and many of those, although not all, have been fixed in MySQL NDB Cluster 8.0.31.

New features added in RonDB 21.04:

  1. Automated Thread Configuration improvements and default behaviour

  2. Automated Memory Configuration

  3. Automated CPU spinning

  4. Configurable number of replicas

  5. Improved networking through send thread handling

  6. Integrated benchmarking tools in RonDB binary distribution

  7. Support for Date data types in primary keys in ClusterJ

  8. Support for Longvarchar data types in primary keys in ClusterJ

  9. 3x improvement of performance in ClusterJ API

  10. Improvements in ClusterJ to avoid single-threaded garbage collection

  11. Changed defaults of configuration variables

  12. Place pid-files in configured location

  13. Improvements of ndb_import

  14. Handling Auto Increment in ndb_import program

  15. Kill -TERM causes graceful stop of data node

  16. Support larger transactions in RonDB

  17. Introduced new configuration variable LowLatency (see the configuration sketch after this list)

  18. Improved error handling when no contact with cluster (4009)

  19. Support for Mac OS X on ARM64

  20. Support for Linux on ARM64

  21. Running RonDB on WSL 2 (Linux on Windows)

  22. Make it possible to use IPv4 sockets between ndbmtd and API nodes

  23. Two new ndbinfo tables to check memory usage

  24. Use of realtime prio in NDB API receive threads

  25. ClusterJ supporting setting database when retrieving Session object

  26. Added support for running MTR with multiple versions of mysqld

  27. Allow newer versions in the 21.04 series to create tables recognized by older versions, at least back to 21.04.9

  28. Support setLimits for query.deletePersistentAll()

  29. Move log message to debug when connecting to wrong node id using a fake hostname

  30. Docker build files

  31. Ensured that pid file contains PID of data node, not of angel
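
As a sketch of how a configuration variable such as LowLatency (item 17 above) can be set: the snippet below assumes it is a data node parameter placed in the [ndbd default] section of the cluster configuration file; consult the RonDB documentation for the authoritative section, default and accepted values.

# Illustrative snippet appended to config.ini (cluster configuration file).
# Section placement and value are assumptions; in practice the parameter
# would be added to the existing [ndbd default] section.
cat >> config.ini <<'EOF'
[ndbd default]
LowLatency=1
EOF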

New features added in RonDB 22.01:

  1. Make query threads the likely scenario

  2. Move Schema Memory to global memory manager

  3. Improved placement of primary replicas.

  4. Removing index statistics mutex as bottleneck in MySQL Server

  5. More flexibility in thread configuration.

  6. Use Query threads also for Locked Reads

New features added in RonDB 22.10:

  1. Support variable sized disk rows

  2. Improve crashlog

The flagship feature of RonDB 22.10 is disk columns. Previously this was a feature mainly usable by expert users. As an example, HopsFS has built the capability to store small files in RonDB using disk columns.

Disk columns previously had the limitation that disk rows always had a fixed size. Thus, declaring a VARCHAR(100) column using utf8mb4 meant that 400 bytes of space were used in each row, independent of what was stored in the column.

With RonDB 22.10 the disk rows only use the space they require; how much storage they use thus depends on the data. The implementation uses the same data structures as for in-memory rows, which means that we will also be able to support online adding of new disk columns.
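
As a minimal sketch of how a table with a variable sized disk column can be declared, assuming the mysql client is connected to a RonDB MySQL Server and using made-up names for the log file group, tablespace, table and columns:

# Create a log file group and tablespace for disk columns, then a table
# where the VARCHAR column is stored on disk. With RonDB 22.10 the disk
# row only uses the space the stored value actually needs.
mysql <<'SQL'
CREATE LOGFILE GROUP lg_1 ADD UNDOFILE 'undo_1.log' ENGINE=NDB;
CREATE TABLESPACE ts_1 ADD DATAFILE 'data_1.dat' USE LOGFILE GROUP lg_1 ENGINE=NDB;
CREATE TABLE feature_values (
  id BIGINT NOT NULL PRIMARY KEY,        -- indexed columns stay in memory
  features VARCHAR(100) STORAGE DISK     -- stored on disk
) ENGINE=NDB TABLESPACE ts_1 CHARACTER SET utf8mb4;
SQL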

In Feature Store applications, which often store variable sized arrays of numbers and arrays of status variables, this leads to a significant saving of storage space. It can easily save a factor of 10x in storage space.

The combination of this and the use of disk columns can make it possible to increase the storage space for Feature Stores by a factor of 10x while the price stays similar.

The capabilities of modern SSDs and NVMe drives are thus now fully exploited by RonDB 22.10. There has been a major drive in the last years to improve the quality of the code handling disk columns. With RonDB 21.04 and RonDB 22.10 the disk columns have the same quality as in-memory columns. With RonDB 22.10 we also have the same space efficiency for in-memory columns and disk columns, and thus disk columns move from the expert domain to the normal users of RonDB.

Modern NVMe drives can handle very large loads; main memory is still more capable and will deliver much better latency and throughput, but at a higher cost. Thus with the RonDB 22.10 release the user can decide, based on their requirements, whether to store features in main memory columns or in disk columns. Given the rapid development of NVMe drives we expect the use case for disk columns to be increasingly important for RonDB applications.

Using RonDB 22.10 it is now possible to store petabytes of data in a single RonDB cluster. On top of this, even more data can be stored in the HopsFS file system, which is also built on top of RonDB and uses HopsFS data nodes to store large files. On top of HopsFS we have Hudi, which enables efficient SQL query execution over these many petabytes of data. Thus, with RonDB as the base platform for its data, Hopsworks enables applications to store huge amounts of data used for machine learning, both in online and offline applications.

Test environment#

RonDB is tested in four different ways. MTR is a functional test framework built using SQL statements to test RonDB.

The Autotest framework is specifically designed to test RonDB using the NDB API. The Autotest is mainly focused on testing high availability features and performs thousands of restarts using error injection as part of a full test suite run.

Benchmark testing ensures that we maintain the throughput and latency that is unique to RonDB. The benchmark suites used are integrated into the RonDB binary tarball making it very straightforward to run benchmarks for RonDB.

Finally, we also test RonDB in the Hopsworks environment, where we perform both normal operations and many actions to manage the RonDB clusters.

RonDB has a number of MTR tests that are executed as part of the build process to ensure the quality of RonDB.

MTR testing#

RonDB has a functional test suite using MTR (MySQL Test Run) that executes more than 500 RonDB-specific test programs. In addition there are thousands of test cases for the MySQL functionality. MTR is executed on both Mac OS X and Linux.

We also have a special mode of MTR testing where we can run with different versions of RonDB in the same cluster to verify our support of online software upgrade.
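
As an illustration, assuming a build tree where the mysql-test directory is available, the NDB-specific test suite can be run roughly as follows; exact suite names and options may differ between versions.

# Run the NDB test suite with MTR from the mysql-test directory
# of a RonDB build tree (suite name and options are an example)
cd mysql-test
./mysql-test-run.pl --suite=ndb --parallel=4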

Autotest#

RonDB is very focused on high availability. This is tested using a test infrastructure we call Autotest. It contains many hundreds of test variants, and executing the full set takes around 36 hours. One test run with Autotest uses a specific configuration of RonDB. We execute multiple such configurations, varying the number of data nodes, the replication factor, and the thread and memory setup.

An important part of this testing framework is that it uses error injection. This means that we can test exactly what will happen if we crash in very specific situations, if we run out of memory at specific points in the code, or if the timing changes due to small sleeps inserted in critical paths of the code.

During one full test run of Autotest, RonDB nodes are restarted thousands of times in all sorts of critical situations.

Autotest currently runs on Linux with a large variety of CPUs, Linux distributions and even on Windows using WSL 2 with Ubuntu.

Benchmark testing#

We test RonDB using the Sysbench test suite, DBT2 (an open source variant of TPC-C), flexAsynch (an internal key-value benchmark), DBT3 (an open source variant of TPC-H) and finally YCSB (Yahoo Cloud Serving Benchmark).

The focus is on testing RonDB's LATS capabilities (low Latency, high Availability, high Throughput and scalable Storage).

Hopsworks testing#

Finally, we also execute tests in Hopsworks to ensure that RonDB works with HopsFS, the distributed file system built on top of RonDB, with HSFS, the Feature Store designed on top of RonDB, and with all other use cases of RonDB in the Hopsworks framework.

New features#

Variable sized disk rows#

Storing columns on disk was introduced into MySQL NDB Cluster already in version 5.1. It has been constantly improved, and efforts have been made to make it much more stable and performant. Thus in RonDB 22.10 the quality of disk columns is on par with in-memory columns.

Benchmark experiments available on www.rondb.com show the latency and throughput of in-memory columns and disk columns. The section on Scalable Storage focuses on the performance and latency of disk columns using the YCSB benchmark.

On mikaelronstrom.blogspot.com there are blog posts from October 2020 about the performance of large insert loads into tables using disk columns. These experiments show how RonDB can handle much more than 1 GByte per second of insert load into disk columns, given sufficient HW to support it.

In RonDB 22.10 we add, on top of this performance and stability, the ability to use a more compact representation of the data in disk columns. Previously each disk row had a fixed size; with RonDB 22.10 each disk row uses the same data structure as in-memory rows. This means that we support storing columns of variable size in a variable sized row. The disk rows also support storing dynamic columns, which means that we will also be able to support online add of disk columns.

Improved placement of primary replicas#

The distribution of primary replicas in RonDB 21.04 isn't optimal for the new fragmentation variants. With e.g. 8 fragments per table and 2 nodes in a four-LDM setup, two LDM threads each get two fragments to act as primary for, whereas the other two LDM threads get no primary replicas to handle.

This is handled by a better setup at creation of the table. However, to also handle inactive nodes, we need to redistribute the fragments at various events.

The redistribution is only allowed if all nodes in the cluster have been upgraded to at least RonDB 22.01. Older versions of RonDB will not redistribute, and we need to ensure that all data nodes use the same primary replicas; otherwise we would cause a multitude of constant deadlocks.

This change improves performance by about 30% for the DBT2 benchmark.

Index stat mutex bottleneck removed#

A major bottleneck in the MySQL Server is the index statistics mutex.

This mutex is acquired 3 times per index lookup to gather index statistics. It becomes a bottleneck when Sysbench OLTP RW reaches around 10000 TPS with around 100 threads, and is thus a severe limitation on the scalability of the MySQL Server using RonDB. In Sysbench benchmarks this improvement has provided at least 10% higher throughput.

To handle this we ensure that the hot path through the code doesn't need to acquire the global mutex at all. This is solved by using the NDB_SHARE mutex a bit more and making the ref_count variable an atomic variable.

We also needed to handle some global statistics variables. This was fixed by accumulating them on a local object and transferring them to the global object every now and then.

Query thread improvement#

Query threads were introduced in MySQL Cluster 8.0.23. This meant that query threads could be used for READ COMMITTED queries. With this feature, query threads are extended to also handle the PREPARE phase of locked reads using key-value lookup through LQHKEYREQ.

This means more concurrency and provides better scalability for applications that rely heavily on locked reads, such as the DBT2 benchmark.
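
For readers unfamiliar with the term, a locked read is for example a key lookup performed under SELECT ... FOR UPDATE inside a transaction. A hedged sketch using the mysql client (table and column names are made up):

# Illustrative only: a locked key lookup whose PREPARE phase can,
# starting with RonDB 22.10, be executed in a query thread.
# The accounts table is hypothetical.
mysql <<'SQL'
BEGIN;
SELECT balance FROM accounts WHERE account_id = 42 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE account_id = 42;
COMMIT;
SQL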

Improved networking through send thread handling#

RonDB 22.10 has made improvements to the send thread handling making it less aggressive and also automatically activating send delays at high loads.

Move Schema Memory to global memory manager#

This is the final step in a long series of developments moving all major memory consumers to the global memory manager. From RonDB 22.01 and all later versions, all major memory consumers use the global memory manager.

This change is mostly an internal change that ensures that all Schema Memory objects are using the global memory manager. Already in RonDB 21.04 memory configuration was automatic, so this makes some memory more flexibly available.

Another major improvement in this change is the addition of a malloc-like interface to the global memory manager. This is used in a few places, which e.g. means that a single table can now have any number of partitions, independent of the number of LDM threads in the data node.
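
As a hedged illustration of requesting an explicit partition count for a table, independent of the LDM thread count, using standard MySQL partitioning syntax for NDB tables (table and column names are made up):

# Illustrative sketch: explicitly requesting 8 partitions for an NDB table,
# regardless of how many LDM threads the data nodes are configured with.
# The sensor_readings table is hypothetical.
mysql <<'SQL'
CREATE TABLE sensor_readings (
  sensor_id INT NOT NULL,
  reading_time DATETIME NOT NULL,
  value DOUBLE,
  PRIMARY KEY (sensor_id, reading_time)
) ENGINE=NDB
  PARTITION BY KEY (sensor_id) PARTITIONS 8;
SQL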

More flexibility in thread configuration#

This patch series was introduced mainly to make it possible to use RonDB to experiment with thread configurations that typically were not supported in NDB. The main change is to make it possible to use receive threads for all types of thread work.

With these changes it is possible to e.g. run with only a set of receive threads.

The long-term goal of this patch is to find an even better configuration for automatic thread configuration.
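
As a hedged sketch, such experiments are typically expressed through the ThreadConfig parameter in the cluster configuration; the thread type and count below are only an example, and in practice the parameter would be edited into the existing [ndbd default] section of config.ini.

# Illustrative snippet for config.ini: run the data nodes with only
# a set of receive threads (values are examples only)
cat >> config.ini <<'EOF'
[ndbd default]
ThreadConfig=recv={count=4}
EOF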

Improve crashlog#

  1. Interleave Signal and Jam dumps. Signals are printed NEWEST first, and under each signal the corresponding Jam entries, OLDEST first.

  2. Let printPACKED_SIGNAL detect whether we're in a crashlog dump. If so, print the contained/packed signals NEWEST first and under each signal the corresponding Jam entries, OLDEST first. When not in a crashlog dump, print the contained signals NEWEST first without Jam entries.

  3. Better formatting and messages

    1. Cases with missing/unmatched signals and Jam entries are handled gracefully.

    2. Legend added

    3. Print signal ids in hexadecimal form

    4. Don't print block number in packed signals

  4. JamEvents can have five types

    1. LINE: As before, show the line number in the dump

    2. DATA: Show the data in the dump with a "d" prefix to distinguish it from a line number. This type of entry is created by *jam*Data* macros (or the deprecated *jam*Line* macros). The data is silently truncated to 16 bits.

    3. EMPTY: As before, do not show in the dump

    4. STARTOFSIG: Used to mark the start of a signal and to save the signal Id

    5. STARTOFPACKEDSIG: Used to mark the start of a packed signal and to save both its signal Id and pack index

  5. Update Jam macros

    1. Deprecate *jam*Line* macros and add *jam*Data* macros in their place

      1. jamBlockData

      2. jamData

      3. jamDataDebug

      4. thrjamData

      5. thrjamDataDebug

    2. Cleanup, add documentation, and add internal prefix to macros only used in other macro definitions

  6. Static asserts to make sure that

    1. EMULATED_JAM_SIZE is valid

    2. JAM_FILE_ID refers to the correct filename. This was previously tested occasionally, run-time in debugging builds. With this change the test is performed always and compile-time. The jamFileNames table and JamEvent::verifyId had to be moved from Emulator.cpp to Emulator.hpp in order to be available at compile-time.

    3. File id and line number fit in the number of bits used to store them

  7. Refactoring, comments etc.

  8. Refactor signaldata

    1. Introduce printHex function and use it to print Uint32 sequences

  9. jamFileNames maintenance

    1. Add test_jamFileNames.sh script to find problems in the jamFileNames[] table

    2. Add a unit test for test_jamFileNames.sh

    3. Correct the problems found

Make query threads the likely scenario#

An optimisation was made in the code such that the code path using query threads is the one the compiler will optimise for. It provides a minor performance improvement.

Bug fixes#

Wrong assert in recv_awake method#

The method recv_awake asserted that it was always called in state FS_SLEEPING. This wasn't correct, so this assert was removed.

Enable GCP stop#

Ensure that GCP stop is enabled by default.

Ensure that DBTC tracks long running transactions to print out outliers that cause DBTC to block GCPs.

Added more printouts when GCP stop is close to happening.

Added code to check if DBTC for some reason is making no progress on handling a GCP. Printouts added to enable better handling of this issue.

Put back the inactive transaction timeout to 40 days.