Release Notes RonDB 21.04.9#
RonDB 21.04.9 is the ninth release of RonDB 21.04.
RonDB 21.04 is based on MySQL NDB Cluster 8.0.23. RonDB 21.04.9 is a bug fix release based on RonDB 21.04.8. It also adds a few new features that were required since RonDB 21.04.9 is released together with Hopsworks 3.1, a feature store for Machine Learning applications.
RonDB 21.04.9 is released as open source software with binary tarballs for use on Linux. It is developed on Linux and Mac OS X, and WSL 2 on Windows (Linux on Windows) is used for automated testing.
RonDB 21.04.9 can be used with both x86_64 and ARM64 architectures although ARM64 is still in beta state.
RonDB 21.04.9 is tested and verified on both x86_64 and ARM64 platforms using both Linux and Mac OS X. It is, however, only released as a binary tarball for x86_64 on Linux.
Description of RonDB#
RonDB is designed to be used in a managed environment where the user only needs to specify the type of the virtual machine used by the various node types. RonDB has the features required to build a fully automated managed RonDB solution.
It is designed for applications requiring the combination of low latency, high availability, high throughput and scalable storage.
You can use RonDB in a serverless version on app.hopsworks.ai. In this case Hopsworks manages the RonDB cluster and you can use it for your machine learning applications. You can use this version for free with certain quotas on the number of Feature Groups (tables) and memory usage. Getting started with this is a matter of a few minutes, since the setup and configuration of the database cluster is already taken care of by Hopsworks.
You can also use the managed version of RonDB available on hopsworks.ai. This sets up a RonDB cluster in your own AWS, Azure or GCP account using the Hopsworks managed software, given only a few details on the HW resources to use. These details can be provided either through a web-based UI or using Terraform. The RonDB cluster is integrated with Hopsworks and can be used both for RonDB applications and for Hopsworks applications.
You can also use the cloud scripts to easily set up a cluster on AWS, Azure or GCP. This requires no previous knowledge of RonDB; the scripts only need a description of the HW resources to use, and the rest of the setup is automated.
Finally, you can use the open source version: either download the binary tarball and set it up yourself, or build it from source. The following command downloads the tarball:
```bash
# Download x86_64 on Linux
wget https://repo.hops.works/master/rondb-21.04.9-linux-glibc2.17-x86_64.tar.gz
```
RonDB 21.04 is a Long Term Support version that will be maintained until at least 2024.
Maintaining 21.04 mainly means fixing critical bugs and handling minor change requests. It doesn't involve merging with any future release of MySQL NDB Cluster; this will be handled in newer RonDB releases.
Backports of critical bug fixes from MySQL NDB Cluster will happen when deemed necessary.
Summary of changes in RonDB 21.04.9#
RonDB 21.04.9 contains 15 bug fixes and 9 new features on top of RonDB 21.04.8. In total, RonDB 21.04 contains 24 new features on top of MySQL Cluster 8.0.23 and a total of 117 bug fixes.
The main reason for the new features is added functionality in ClusterJ required by OnlineFS, which is part of the Feature Store in Hopsworks 3.1. OnlineFS receives new data from Kafka and uses ClusterJ to inject this data into thousands of databases in RonDB.
One new feature provides much improved latency and throughput through a very simple change, found using modern performance analysis tools integrated with the Linux kernel.
A set of features revolves around improvements to our build process for RonDB.
RonDB uses four different ways of testing. MTR is a functional test framework built using SQL statements to test RonDB. The Autotest framework is specifically designed to test RonDB using the NDB API; it is mainly focused on testing high availability features and performs thousands of restarts using error injection as part of a full test suite run. Benchmark testing ensures that we maintain the throughput and latency that are unique to RonDB. Finally, we also test RonDB in the Hopsworks environment, where we perform both normal actions as well as many actions to manage RonDB clusters.
RonDB also has a number of unit tests that are executed as part of the build process.
In addition RonDB is tested as part of testing Hopsworks.
RonDB has a functional test suite using MTR (MySQL Test Run) that executes more than 500 RonDB-specific test programs. In addition there are thousands of test cases for the MySQL functionality. MTR is executed on both Mac OS X and Linux.
We also have a special mode of MTR testing where we can run with different versions of RonDB in the same cluster to verify our support of online software upgrade.
RonDB is very focused on high availability. This is tested using a test infrastructure we call Autotest. It contains many hundreds of test variants that take around 36 hours to execute in full. One test run with Autotest uses a specific configuration of RonDB. We execute multiple such configurations, varying the number of data nodes, the replication factor and the thread and memory setup.
An important part of this testing framework is that it uses error injection. This means that we can test exactly what happens if we crash in very specific situations, if we run out of memory at specific points in the code, and if the timing changes in various ways, by inserting small sleeps in critical paths of the code.
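The error-injection idea can be sketched as follows. This is a minimal illustrative model in Java; all class and method names are invented, and this is not RonDB's actual error-insertion mechanism:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of error injection: test code arms a numbered error point, and
// production-path code fails deliberately when it reaches that point.
// All names here are invented for illustration.
public class ErrorInjectDemo {
    private static final Set<Integer> active = ConcurrentHashMap.newKeySet();

    // Called by the test framework to arm or disarm an error point.
    static void insertError(int code) { active.add(code); }
    static void clearError(int code) { active.remove(code); }

    // Called from a critical path in the code under test.
    static void crashIfArmed(int code) {
        if (active.contains(code)) {
            throw new IllegalStateException("injected crash at error point " + code);
        }
    }

    // A code path with an error point placed just before a critical step.
    static String writeRow() {
        crashIfArmed(1000); // simulate a crash right before the commit step
        return "committed";
    }
}
```

A test suite can then arm error point 1000, run the operation, and verify that the system recovers correctly from the failure at exactly that point.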
During one full test run of Autotest RonDB nodes are restarted thousands of times in all sorts of critical situations.
Autotest currently runs on Linux with a large variety of CPUs, Linux distributions and even on Windows using WSL 2 with Ubuntu.
We test RonDB using the Sysbench test suite, DBT2 (an open source variant of TPC-C), flexAsynch (an internal key-value benchmark), DBT3 (an open source variant of TPC-H) and finally YCSB (Yahoo Cloud Serving Benchmark).
The focus is on testing RonDB's LATS capabilities (low Latency, high Availability, high Throughput and scalable Storage).
Finally, we also execute tests in Hopsworks to ensure that RonDB works with HopsFS, the distributed file system built on top of RonDB, with HSFS, the Feature Store designed on top of RonDB, and with all other use cases of RonDB in the Hopsworks framework.
These tests include both functional tests of the Hopsworks framework as well as load testing of HopsFS and Hopsworks.
Use of realtime prio in NDB API receive threads#
The NDB API receive threads now use realtime priority. Experiments show that this removes a lot of variance in benchmarks, decreasing variance by a factor of 3. In a Sysbench point select benchmark it improved performance by 20-25% while at the same time improving latency by 20%. One could also get the same throughput at almost 4x lower latency (0.94 ms round trip time for a PK read before the change compared to 0.25 ms after it).
RONDB-167: ClusterJ supporting setting database when retrieving Session object#
ClusterJ has been limited to handle only one database per cluster connection. This severely limits the usability of ClusterJ in cases where there are many databases such as in a multi-tenant use case for RonDB.
At the same time the common case is to handle only one database per cluster connection. Thus it is important to maintain the performance characteristics for this case.
One new addition to the public ClusterJ API is a new getSession call taking a String object representing the database name to be used by the Session object. Once a Session object has been created, it cannot change to another database. A Session object can have a cache of DTO objects; this cache would be fairly useless when used with many different databases, so it isn't supported in this implementation. The limitation this brings about is that a transaction is bound to a specific database.
Sessions are cached in RonDB: there is one linked list of cached Session objects for the default database, and each other database gets a linked list at the first use of that database in a SessionFactory object. The limit on the number of cached Session objects is maintained globally. Currently we simply avoid putting a Session back on its list if the maximum has been reached. An improvement could be to maintain a global order of the latest use of Session objects, but this hasn't been implemented here.
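The caching scheme described above can be sketched with plain Java collections. This is a simplified model, not the actual ClusterJ implementation; all names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Simplified model of the per-database session cache: one list per database,
// created on first use, with a single global cap on cached sessions.
public class SessionCacheDemo {
    static class CachedSession {
        final String database;
        CachedSession(String database) { this.database = database; }
    }

    private final Map<String, Deque<CachedSession>> listsPerDatabase = new HashMap<>();
    private final int globalMax; // one global limit shared by all databases
    private int cachedCount = 0;

    SessionCacheDemo(int globalMax) { this.globalMax = globalMax; }

    // Return a cached session for the database, or null if none is cached.
    synchronized CachedSession checkout(String database) {
        Deque<CachedSession> list = listsPerDatabase.get(database);
        if (list == null || list.isEmpty()) return null;
        cachedCount--;
        return list.pollFirst();
    }

    // Put a closed session back in the cache, unless the global cap is
    // reached; in that case the session is simply dropped, as in the
    // current scheme described above.
    synchronized boolean checkin(CachedSession session) {
        if (cachedCount >= globalMax) return false;
        listsPerDatabase.computeIfAbsent(session.database, db -> new ArrayDeque<>())
                        .addFirst(session);
        cachedCount++;
        return true;
    }
}
```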
A Session object has a Db object that represents the Ndb object used by the Session. This Ndb object is bound to a specific database. For simplicity we store the database name and a boolean indicating whether the database is the default database. The database name could have been retrieved from the Ndb object as well. This database name in the Db object is used when retrieving an NdbRecord for a table.
ClusterJ handles NdbRecord in an optimised manner that tries to reuse them as much as possible. Previously it created one Ndb object together with an NdbDictionary object to handle NdbRecord creation. Now this dictionary is renamed and used only for the default database. Each new database creates one more Ndb object together with an NdbDictionary object, which handles all NdbRecords for that database. For quick lookup we use a ConcurrentHashMap keyed on the database name to find the NdbDictionary object.
Previously there was a ConcurrentHashMap for all NdbRecord's, both for tables and for indexes. These used a naming scheme that was tableName only or tableName+indexName.
This map is kept, but now the naming scheme is either databaseName+tableName or databaseName+tableName+indexName.
Thus more entries are likely to be in the hash map, but it should not affect performance very much.
These maps are iterated over when unloading schemas and when removing cached tables.
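The composite-key scheme can be illustrated as follows. The `+` separator and helper names are assumptions made for illustration, not the actual implementation:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the composite-key scheme: records for tables and indexes from
// all databases share one map, disambiguated by a database-name prefix.
public class RecordKeyDemo {
    static final ConcurrentMap<String, Object> records = new ConcurrentHashMap<>();

    static String tableKey(String database, String table) {
        return database + "+" + table;
    }

    static String indexKey(String database, String table, String index) {
        return database + "+" + table + "+" + index;
    }

    // Remove all cached records for a given table in a given database,
    // as done when unloading a schema (simplified).
    static void unloadTable(String database, String table) {
        String prefix = tableKey(database, table);
        records.keySet().removeIf(k -> k.equals(prefix) || k.startsWith(prefix + "+"));
    }
}
```

The key point is that the same table name in two different databases now maps to two distinct entries, so a multi-tenant cluster connection cannot serve one tenant a record belonging to another.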
With multiple databases in a cluster connection, the LRU list handling becomes more important, to ensure that hot databases are cached more often than cold databases. We implemented a specific LRU list of Session objects in addition to a queue per database.
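The LRU behaviour can be modeled with Java's access-ordered `LinkedHashMap`. This is an illustrative sketch of the eviction idea, not the actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of LRU handling for cached sessions across databases: an
// access-ordered map evicts the least recently used entry when the
// global capacity is exceeded, so hot databases stay cached.
public class SessionLruDemo {
    private final Map<String, String> lru;

    SessionLruDemo(int capacity) {
        // accessOrder = true: iteration order is least to most recently used
        lru = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > capacity;
            }
        };
    }

    // Record a use of a session for this database (moves it to most recent).
    void touch(String database) { lru.put(database, database); }

    boolean isCached(String database) { return lru.containsKey(database); }
}
```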
We added a few more test cases for multiple databases in ClusterJ, as well as more tests for caching of dynamic objects and caching of Session objects.
Added support for running MTR with multiple versions of mysqld#
RONDB-169: Allow newer versions in the 21.04 series to create tables recognized by older versions (at least 21.04.9)#
RONDB-171: Support setLimits for query.deletePersistentAll()#
This feature adds support for limits when using the deletePersistentAll method in ClusterJ.
deletePersistentAll on a Query object makes it possible to delete all rows in a range or matching a search condition. However, a range could contain millions of rows; thus a limit is a good idea to avoid huge transactions.
RONDB-174: Move log message to debug when connecting to wrong node id using a fake hostname#
RONDB-184: Docker build changes#
New Docker build files to build experimental ARM64 images, plus fixes to the x86 Docker build files and the Jenkins build files:

- Added a Dockerfile with base image ubuntu:22.04 for experimental ARM64 builds
- Used caching of downloads and builds within Dockerfiles
- Bumped Sysbench and DBT2 versions to accommodate ARM64 builds (build_bench.sh)
- Added dynamic naming of tarballs depending on build architecture (create_rondb_tarball.sh)
- Moved docker-build.sh logic into Dockerfiles
- Formatted scripts
Removed a few restart printouts that generated loads of output with little information#
Update versions for Sysbench and DBT2 in release script#
Updated to ensure that Sysbench and DBT2 work also on ARM64 platforms.
RONDB-158: Fix number of loops in NdbSpin#
A miscalculation in the initialisation of the NdbSpin logic made the amount of time spent in NdbSpin too low. This caused the spinning thread to be too active in using CPU resources, which had a negative performance impact.
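The kind of calibration involved can be sketched as follows: time a probe loop, then scale the iteration count to a target spin duration. All names are invented for illustration; this is not the actual NdbSpin code:

```java
// Sketch of spin-loop calibration: time a probe loop, then compute how many
// iterations are needed to spin for a target duration. If this calculation
// is off, the spin ends too early and the thread re-enters the scheduler
// far more often than intended, burning CPU without the latency benefit.
public class SpinCalibrateDemo {
    static volatile long sink; // prevent the JIT from removing the probe loop

    // Measure an upper bound on nanoseconds per loop iteration (at least 1).
    static long nanosPerIteration(int probeLoops) {
        long start = System.nanoTime();
        long s = 0;
        for (int i = 0; i < probeLoops; i++) s += i;
        sink = s;
        long elapsed = System.nanoTime() - start;
        // Guard against a zero measurement on coarse clocks.
        return Math.max(1, elapsed) / Math.max(1, probeLoops) + 1;
    }

    // Number of loop iterations needed to spin roughly targetNanos.
    static long loopsForTarget(long targetNanos, int probeLoops) {
        return Math.max(1, targetNanos / nanosPerIteration(probeLoops));
    }
}
```

If the per-iteration cost is overestimated (or the division is done the wrong way), the computed loop count comes out too low and the spin time shrinks accordingly, which matches the symptom described above.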
Backport of WL15005 from MySQL 8.0.30#
A new benchmark framework was added to MTR in MySQL 8.0.30 that makes it easy to run benchmarks on RonDB. This framework was backported from RonDB 22.10 to RonDB 21.04.
RONDB-160: Mix of Query threads and non-Query threads caused crash#
Using only 2 CPUs and automatic memory configuration means that no query threads are set up. If such a RonDB data node is mixed in a cluster with a node that has e.g. 4 CPUs and thus 1 query thread, some primary key lookups will cause a crash in the node with 2 CPUs.
RONDB-190: Crash on corrupt row data structure#
In very rare situations an INSERT of a new row collided with reading the same row. When this INSERT reorganised the page to find space in the page, it did so without protecting it with an exclusive lock. This led to the reader reading garbage that led to a crash. Fixed by ensuring that the INSERT retrieves an exclusive lock on the table fragment before reorganising the page to find space in it.
Backport of BUG32871806: Remove libcrypt and libatomic from CMake files#
Fixes for testNodeRestart -n ChangeNumLDMsNR test case#
RONDB-180: Fix cluster reconnect for unload and setpartition operations#
RONDB-179: unlink of pid file bug#
The deletion of the pid file used a pointer to an area that had already been freed. This led to the deletion of some random file name; most of the time the unlink failed since the file name was simply an empty string. However, in rare cases with systemd usage, a real file could be deleted.
RONDB-182: Fixed NPE in release after unloading table schema#
RONDB-183: Drop instance cache when session is closed#
When closing a session, it is important to drop the cache of dynamic objects as well.
RONDB-186: Fix issues with early errors causing crash#
A bug in ClusterJ led to corrupt signals being sent to data nodes, which caused crashes. This fix ensures that such errors no longer cause a crash, only a failed transaction. Test cases for these errors were added as well.
RONDB-186: ClusterJ could send corrupt signals to data nodes#
In rare cases the NdbRecordImpl object was released before the Session object was completely released. This led to sending signals with corrupt data. Delayed the release of the NdbRecordImpl sufficiently to avoid this issue.
RONDB-186: Fixes to dynamic object cache in ClusterJ#
RONDB-187: Fixed bug in setting not null columns to null#
There was a bug in ClusterJ such that when a NOT NULL string column was set to NULL, the value was silently set to the empty string instead. Setting a NOT NULL column to NULL should result in a failed transaction. New test cases were added for this as well.