
RonDB compared to InnoDB Cluster#

One place where one could consider using RonDB is as a replacement in use cases where InnoDB or InnoDB Cluster is currently used. There are cases where such a switch is a good idea and cases where it is not.

In this chapter we describe the differences between InnoDB and RonDB to help you decide whether a switch to RonDB makes sense.

NDB is the storage engine that RonDB uses in MySQL. This chapter compares the NDB storage engine in RonDB with the InnoDB storage engine in MySQL. We will also compare InnoDB Cluster with the clustering features of RonDB. The MySQL Server code in RonDB is mostly the same as that used by InnoDB. There are very few changes to the MySQL Server code in RonDB; most of them fix bugs related to cluster replication, NDB stored grants and a few other areas of the MySQL Server.

Differences between RonDB and InnoDB#

The differences can be categorized as performance differences, feature differences, availability differences, differences in how consistency is handled, and some minor differences in how features are implemented.

Performance differences#

The first thing to consider is whether performance will improve or degrade when using RonDB. The answer is very much "it depends".

First of all, it is important to note that we are comparing NDB and InnoDB as storage engines. The most important difference is that InnoDB is a local storage engine: each call from the MySQL Server code into the storage engine is a local call that never leaves the MySQL Server process. In most cases each call is evaluated in the same thread that executes the query.

This has the advantage that very few context switches are required to execute a short query. The disadvantage is that it is difficult to make use of several threads, and even more so to use several computers, to execute a query.

When calls are made into the NDB storage engine they frequently have to traverse a transporter (a socket) and return again as part of the storage engine call. To handle this, the NDB storage engine applies a lot of optimisations to perform those calls in a batched mode. But many queries will still involve frequent communication with the data node.

There are a few dimensions in which RonDB and InnoDB differ. The first, as mentioned, is that RonDB is a distributed storage engine whereas InnoDB is a local storage engine. The second is that RonDB stores most of its data in memory while InnoDB stores data on disk using a page cache. The third dimension is the threading model. RonDB consists of multiple data nodes, and each data node consists of multiple threads that need to communicate. InnoDB accesses data from the MySQL Server thread and uses mutexes, condition variables and other mechanisms to handle concurrent threads accessing the same data.

So what does this mean for performance? It means that the performance story is mixed. There are many cases where InnoDB is faster when access is made through the MySQL Server, but there are also cases where RonDB is faster and there are lots of cases where performance is similar.

As an example from Sysbench, InnoDB has the advantage that there are no context switches to handle during execution of a simple SQL query. At the same time, a query that performs an ordered index scan searching for 100 rows (a common query in Sysbench) is automatically parallelised in NDB. Since InnoDB uses many mutexes to protect shared data structures it will suffer contention in write-intensive applications. RonDB divides data between separate threads and can thus scale very well for write-intensive applications. Each data node can handle millions of updates per second.

Performance has two components, latency and throughput. Latency is affected by the distributed nature, the threading model, and whether data is resident in memory or not. If InnoDB has the requested data in memory it will almost always return data faster than RonDB, unless the query is complex and contains parts that can be parallelised. As an example, the Sysbench benchmark contains a number of queries that read 100 rows from an index. In InnoDB these 100 rows are scanned using a secondary index in one thread. In RonDB this access is automatically parallelised across all partitions of the table, which means that those queries return their data to the MySQL Server quite quickly. If these queries contain a filter that is applied to the rows before returning them as results, RonDB has an even bigger advantage since these condition filters are executed in parallel on all partitions. Thus RonDB gets a lot of automatic parallelisation of queries since data is spread over many different partitions.
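
As a conceptual sketch (this is illustrative Python, not RonDB code), the difference between a single-threaded index scan and a scan parallelised over partitions with a pushed-down filter can be pictured like this; the table layout and the filter are invented for the example:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical table: 8 partitions of (id, value) rows, one per LDM thread.
partitions = [[(p * 100 + i, (p * 100 + i) % 7) for i in range(100)]
              for p in range(8)]

def scan_partition(rows, predicate):
    # Each LDM thread scans its own partition and applies the
    # pushed-down filter locally; only matching rows travel back.
    return [r for r in rows if predicate(r)]

def serial_scan(parts, predicate):
    # InnoDB-style: one thread walks all rows.
    return [r for rows in parts for r in rows if predicate(r)]

def parallel_scan(parts, predicate):
    # RonDB-style: all partitions are scanned concurrently.
    with ThreadPoolExecutor(max_workers=len(parts)) as pool:
        results = pool.map(scan_partition, parts, [predicate] * len(parts))
    return [r for part in results for r in part]

pred = lambda row: row[1] == 0  # the condition filter
assert sorted(serial_scan(partitions, pred)) == sorted(parallel_scan(partitions, pred))
```

Both scans return the same rows; the parallel variant spreads the filter evaluation over all partitions, which is the source of the speedup described above.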

InnoDB performs its request in one thread and stays in this thread, which means no extra delays for thread switching or for sending over sockets or other communication media. It also means, of course, that no automatic parallelisation will occur.

The most similar comparison of RonDB and InnoDB is when RonDB uses 1 replica. There should only be 1 data node, to ensure that all communication is with the local RonDB data node. In this case RonDB will not send any distributed messages that could be limited by the bandwidth of e.g. Gigabit Ethernet or some other medium. RonDB still uses a more complex threading model, but in this setup the latency of RonDB is as small as possible.

The above model isn't practical for RonDB. To make a more realistic comparison we should use two data nodes on different machines, with a MySQL Server on each of those machines colocated with the data node.

In this setup, all tables should be created with the ndb-read-backup configuration variable set, and ndb-data-node-neighbour should be set to point to the node id of the local data node of the MySQL Server (this happens automatically if the data node and the MySQL Server run on the same host, such that they both resolve the hostname of the data node). In this case all normal SELECT statements will be executed locally. Only updates and SELECTs using a special lock mode (e.g. LOCK IN SHARE MODE) will be distributed.
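
A minimal my.cnf sketch of this setup could look as follows; the node id 2 is a placeholder that must be replaced with the node id of the data node colocated with this MySQL Server:

```ini
[mysqld]
# Create tables with the read-backup property so replicas can serve reads.
ndb_read_backup=ON
# Prefer the data node with this node id for reads (assumed here to be
# the data node running on the same host as this MySQL Server).
ndb_data_node_neighbour=2
```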

This is the proper way to set up RonDB if the aim is to replace InnoDB with RonDB. With this configuration there is a good chance that the performance of the application will be on par with the InnoDB application, and sometimes better. With RonDB the replication is automatically in place and all replicas are always kept in sync with each other.

So far we have mainly discussed latency. Throughput is a bit more complex to analyse. With many concurrent threads the performance of InnoDB scales fairly linearly to start with; as more and more threads are added, InnoDB gets diminishing returns, and at some point a maximum performance is reached. The throughput in InnoDB depends on whether the queries are SELECT queries or updating queries. InnoDB has good scalability for SELECT queries, but for updating queries the scalability is more limited.

RonDB scales almost perfectly linearly in the number of concurrent threads. After the number of threads reaches the number of CPU cores, throughput still increases per added thread, though in smaller steps, and remains fairly linear. When using more threads than the number of CPUs available, performance continues to increase, but more slowly. For InnoDB the maximum performance is usually reached around 100 threads. RonDB can usually scale to around 200-300 threads, and with more cluster connections it might scale to 500-600 threads (these numbers reflect common benchmarks and could differ for other applications).

Interestingly, the latency of requests goes down with a small number of concurrent threads. The reason is that one thread's activity starts up a number of threads in the data node; another thread may then get access to those threads without having to wake them up, which speeds things up. One can also improve latency in RonDB by using a special configuration parameter that makes threads in the data node spin for a while before going to sleep.

RonDB scalability is the same for SELECT queries and updating queries. It is an important benefit of RonDB that it has excellent scalability for writes as well as for SELECTs. In addition, several MySQL Servers can access the data simultaneously, such that a cluster can handle at least tens of millions of SQL queries per second. With managed RonDB one can even scale both the number of MySQL Servers and the number of CPUs these MySQL Servers use.

The main reason to choose RonDB would normally be high availability coupled with a simpler consistency model for a cluster setup and predictable latency.

Parallelising Query Execution#

One important thing that makes the comparison between InnoDB and RonDB even harder to describe is that RonDB sometimes parallelises query execution in the data nodes. E.g. when one issues a SELECT with a condition, the condition is pushed down to the NDB data node threads. If the RonDB data node uses e.g. 8 LDM threads in a two-node setup, all 8 LDM threads in both nodes can scan the table in parallel and perform the condition evaluation.

RonDB can parallelise join queries automatically in the same fashion. If the first table has a condition to evaluate, this can be evaluated in parallel in all eight threads. Next, each row found in the first table is sent for e.g. a primary key read in the second table. In this case the join is to a great extent parallelised for execution in the data nodes.

Processing of the resulting rows in the MySQL Server is still performed in only one thread, but this can still yield considerable speedups of query execution since all fetching of data is parallelised. The best case for RonDB is when filters are pushed to the data nodes. If only a small percentage of the rows are sent back to the MySQL Server and we are performing a join operation, the query will be heavily parallelised. If most rows need to be transported back to the single MySQL Server thread, there is still parallelisation of fetching rows, but no parallelisation of filter execution.
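
The pushed-down join described above can be sketched conceptually as follows (illustrative Python, not RonDB internals; the tables, the filter and the 8-thread split are invented for the example):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tables: t1 partitioned over 8 "LDM threads",
# t2 a primary-key lookup table (key -> payload).
t1_partitions = [[(p * 10 + i, p + i) for i in range(10)] for p in range(8)]
t2 = {k: "payload-%d" % k for k in range(80)}

def scan_and_join(rows):
    # Pushed-down work: evaluate the filter on t1 locally, then for
    # each qualifying row do a primary-key read in t2.
    out = []
    for key, val in rows:
        if val % 2 == 0:                     # pushed-down condition on t1
            out.append((key, val, t2[key]))  # PK lookup in t2
    return out

# All partitions are scanned and joined concurrently; only the joined,
# filtered rows return to the single MySQL Server thread.
with ThreadPoolExecutor(max_workers=8) as pool:
    joined = [row for part in pool.map(scan_and_join, t1_partitions)
              for row in part]
```

The key point the sketch illustrates is that both the filter evaluation and the key lookups for the join run in parallel across partitions, while final result assembly remains single-threaded.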

The MySQL Server can decide to push down only a subset of the joins and several subsets of the join query. More on pushdown of joins into the data nodes in a later chapter.

A number of users have made good use of this feature and combined with the Read Backup replica feature and the Fully Replicated table feature it can make some queries go magnitudes faster.

All query execution in InnoDB is currently serialised in one thread. Thus there is no comparative analysis for InnoDB and parallelisation.

Availability differences#

The most important difference between RonDB and InnoDB is the availability story. RonDB was designed for subsecond failover and the ability to perform any management changes online. This is the most common reason for choosing RonDB instead of InnoDB; it is not that common to choose RonDB purely from a performance point of view for SQL applications. The story is different when comparing an NDB API application with an SQL application: if the user combines NDB API applications and SQL applications, the performance story is quite different.

RonDB has also made significant advances in making high throughput possible for developers used to REST calls. The RonDB REST server can handle batched reads of rows, with more capabilities planned as development continues.

One of the advantages of RonDB is that it supports synchronous replication out of the box, including multi-master replication inside the cluster. It supports replication between clusters where all clusters can update the data, using conflict detection and resolution to handle any conflicting updates. Thus when replicating between clusters, updates can occur in all clusters at the same time and conflict detection decides which update survives.

These are all unique features of RonDB.

Capability differences#

As a storage engine, RonDB has more limited capabilities than InnoDB. New features in the MySQL Server are usually designed for InnoDB first and then implemented in NDB. Thus some features of InnoDB don't exist in RonDB or are not as well integrated into RonDB yet.

RonDB doesn't support fulltext indexes. It supports the GIS feature, but has no special GIS index support. JSON support exists in RonDB as well; RonDB can index stored generated columns, but it cannot index virtual generated columns.

RonDB supports up to 512 columns with a maximum row size of 14 kBytes. RonDB doesn't support indexes on prefixes of BLOBs, nor does it support savepoints.

When analysing whether RonDB can replace InnoDB in an application one has to check if RonDB has the necessary features to support the application.

At the same time, RonDB is quickly developing a set of new features that are unique to RonDB and don't exist in InnoDB. One example is an advanced Kubernetes operator supporting autoscaling of both MySQL Servers and RonDB data nodes. The RonDB REST server has numerous new features such as efficient parallel execution of SQL queries with aggregate calculations, intended to support storing time windows of data in RonDB. Coupled with this, RonDB will also support a Time-To-Live (TTL) feature where rows live for a specified time.

Consistency Model differences#

All database engines have different ways of supporting concurrent access to data. A few database engines, such as Oracle and InnoDB, use a model where one can query the data as if time had stopped. This means that the query result is consistent, but the result is as old as the query. Thus if a query takes one minute to execute, the data used in calculating the result was up to one minute old when the query completed.

Other database engines such as IBM DB2, Microsoft SQL Server (can support both models) and RonDB use a model where you read the latest committed data instead.

In this model the query results are always based on the latest data, but at the same time the query result doesn’t show a consistent model of the data at any specific point in time.

When running large and complex queries it isn't really possible to get both an up-to-date and a consistent view of the data. This is only possible if all updates to the database are stopped.

This is another difference between InnoDB and RonDB. RonDB and InnoDB both use row locks to implement concurrency control, but InnoDB also keeps old records around to ensure that all currently running transactions can acquire an old but consistent view of the data. This increases the memory requirements as well when there are lots of updates in parallel with long-running queries.

Given that RonDB is targeted at real-time applications with high requirements on write scalability, we have opted for the moment not to support old but consistent views in RonDB (called Repeatable Read in MySQL terminology). Supporting this is possible in RonDB, but it would require overhead in each row of the table, keeping rows around for longer, and increased processing overhead. Supporting repeatable read would thus leave less memory available to store application data.
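
The two read models can be illustrated with a toy sketch (illustrative Python, not the internals of either engine; the version list and timestamps are invented for the example):

```python
# Versions of one row, kept as (commit_time, value) pairs.
row_versions = [(1, "v1"), (5, "v2"), (9, "v3")]

def snapshot_read(versions, query_start_time):
    # Repeatable-read style (InnoDB): see the newest version that was
    # committed when the query started, even if newer commits exist.
    visible = [v for t, v in versions if t <= query_start_time]
    return visible[-1]

def read_committed(versions):
    # RonDB style: always read the latest committed version.
    return versions[-1][1]

# A query that started at time 6, while commits kept happening:
assert snapshot_read(row_versions, 6) == "v2"  # consistent but older
assert read_committed(row_versions) == "v3"    # newest committed value
```

The sketch also shows why repeatable read costs memory: the engine must retain the older versions (`v1`, `v2`) for as long as some running query might still need them.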

InnoDB has the ability to lock ranges, which isn't possible in RonDB; RonDB only supports row locks and no range locks. The reason is that RonDB supports transparent sharding, which makes it very hard to lock a range since the range is distributed over many servers.

For the most part this difference is a philosophical one. Some applications will benefit from the old but consistent view, some will benefit more from reading the latest committed rows, and for some applications it doesn't really matter.

We can use a metaphor to explain the difference. Assume that we have a train in a station and we need to count some characteristic of the train. In the consistent view we freeze time and perform the analysis on the train as it was at the moment we started the analysis. When the query is done and we plan to take some action based on the analysis, the train might already have left the station and now have different characteristics.

The model where one reads the latest data does not freeze time: it starts the analysis while the train is still in the station and continues the analysis while the train is moving, and thus arrives at an analysis which is more up-to-date. Still, neither variant delivers a perfect view of the world. The problem is simply that the world is moving while we are analysing it. The question is whether we want to analyse the most recent view or an old historical one.

Foreign Key differences#

Both InnoDB and RonDB support foreign keys. The RonDB foreign keys were designed to be as close to InnoDB foreign keys as possible; it was not possible to make them exactly the same since RonDB is a distributed engine.

Differences in Online Change#

RonDB is designed for the highest possible availability. Thus you can add and drop indexes while data can be read and written. Newer versions of InnoDB support writes while adding an index. RonDB can add a column with a quick online operation.

Another major difference is that RonDB development focuses quite a lot on supporting online software upgrades and downgrades. This is necessary to support the demanding requirements of telecom databases and, nowadays, of online machine learning applications.

RonDB supports online repartitioning of a table, InnoDB cannot handle writes during such a change.

Challenges in replacing InnoDB#

As can be seen from the above description, it can sometimes be tough to move an application from InnoDB to RonDB. This is true of any move from one database engine to another.

The easiest path is always to write your application against RonDB from the start, but it is possible to move it from InnoDB and from many other database engines. There is, however, some work involved in moving to a new database engine.

A successful example of such a move is the OpenStack database. The database in OpenStack supports a number of parts of the OpenStack architecture such as Nova, Neutron and so forth. In the Oracle OpenStack solution for Linux and Solaris it is possible to replace InnoDB with a high availability option using MySQL Cluster (RonDB is a fork of MySQL Cluster).

RonDB and InnoDB Cluster#

At the end of 2016 MySQL introduced a new clustering option based on the new group replication feature. We will consider here the differences between RonDB and InnoDB Cluster.

Comparison to Galera Cluster#

The comparison in this chapter will to a great extent carry over to any other clustering solutions, such as Galera Cluster, provided in the MySQL community.

Group Replication#

Traditional MySQL replication is asynchronous. The transaction is committed on the primary first, logged in the replication log, and the reply is sent to the requester. At this point the replicas can retrieve the log and apply it. Traditional MySQL replication is based on a pull model where the participants pull in the changes when they are ready for them. Thus if the primary crashes, transactions can be lost, since there will always be a set of transactions that no replica has pulled in yet.

Group replication is based on push replication instead. It uses a PAXOS protocol to ensure that the participants arrive at the same commit order. Thus there is a huge difference at commit time between traditional MySQL replication and MySQL group replication. In MySQL group replication the committer has to wait until a majority of the participants in the group have received the transaction, plus a couple of rounds to ensure that all nodes come to the same commit decision. Normally a group consists of 3 participants; a majority of 3 participants is 2, and the committer is one of those.

The committer will normally wait for a response from at least one other participant before the commit is completed. There is more than one phase involved in the PAXOS protocol; normally three phases are needed to reach a conclusion of the transaction protocol.
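
The majority rule at commit time can be sketched as a toy model (this is a simplification for illustration, not the actual PAXOS certification logic of group replication):

```python
def majority_commit(acks, group_size):
    """Toy model of the commit decision: a transaction can commit once
    a majority of the group has received it, where the committer
    itself counts as one implicit acknowledgment."""
    received = 1 + acks            # the committer plus the acks received
    return received > group_size // 2

# In a 3-member group the committer needs one more participant:
assert not majority_commit(0, 3)   # committer alone: no majority
assert majority_commit(1, 3)       # committer + 1 ack: majority of 3
```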

This means that MySQL group replication can be used to build a high availability solution.

Both InnoDB Cluster using group replication and RonDB are designed for local area network installations. Both can also be used in wide area networks, but this will have an impact on performance.

Both RonDB and InnoDB Cluster can be combined with asynchronous replication between data centers. RonDB can support active-active functionality in this case and handle conflict detection in an advanced manner. This is not supported with group replication.

Differences between clustering solutions#

One difference is that RonDB is a tightly integrated clustering solution whereas InnoDB Cluster is a loosely integrated one. What this means is that RonDB was built as a clustering solution from the ground up; it doesn't have a non-clustering mode.

InnoDB Cluster is built from a set of MySQL Servers that communicate at commit time. Another important difference is that RonDB was designed for predictable response time.

This leads to some differences. In RonDB the ordering of transactions is based on row locks that are taken in the prepare phase. Thus there are more communication messages in a RonDB transaction; at the same time, communication is tightly integrated in RonDB, with several levels of piggybacking of messages.

In InnoDB Cluster the transaction is executed in its entirety inside one MySQL Server before commit time. This means that there are no distributed transactions possible.

Thus RonDB can execute cross-shard transactions, and it also supports cross-shard joins. InnoDB Cluster can execute neither cross-shard transactions nor cross-shard joins.

InnoDB Cluster doesn’t scale for writes beyond what one MySQL Server can handle. MySQL Cluster can scale to 48 data nodes and 200 MySQL Servers simultaneously changing the data.

InnoDB Cluster can scale up to 9 replicas for read scaling. RonDB can scale to 48 replicas using fully replicated tables accessed from 200 MySQL Servers.

As mentioned in the previous chapter RonDB has a lot more capabilities for online changes compared to InnoDB and this is true also when comparing RonDB to InnoDB Cluster.

Architecture of InnoDB Cluster#

InnoDB Cluster requires the use of the MySQL Router to access the MySQL Servers. Read queries can be sent to any MySQL Server, both the primary and the replicas. Writes must be sent to the primary MySQL Server.

All write transactions have to be routed to the primary server. Read transactions can go either to one of the participant servers or to one of the read replicas. They can go to the primary server if they need to be sure to read the latest view of the data. To handle this, the architecture needs the MySQL Router to decide which MySQL Server to route a transaction to.

The MySQL Router is an important tool to ensure that InnoDB Cluster applications can scale well. It requires that the application distinguish read transactions from write transactions, and it means that an application can only be guaranteed to see its own updates if it uses the same primary MySQL Server all the time. This solution is mainly designed for applications that do a lot more reads than writes. The design mainly targets web applications where the database is updated by background activity in special updating transactions, while most web requests are pure read-only transactions for which it is mostly ok to see an old view of the data. However, e-commerce applications, telecom applications and financial applications will benefit from the RonDB model, where all MySQL Servers can see the updates made by a previous transaction and always see the most current view of the data.
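
The read/write split the router must perform can be sketched as follows (a toy model, not the actual MySQL Router implementation; the statement classification and replica choice are simplified for illustration):

```python
def route(statement, primary, replicas, need_latest=False):
    """Toy read/write split, loosely modelling what a router for
    InnoDB Cluster must do: writes, and reads that must see the
    latest data, go to the primary; other reads go to a replica."""
    if statement.lstrip().upper().startswith(("INSERT", "UPDATE", "DELETE")):
        return primary
    if need_latest:
        return primary            # reading your own writes needs the primary
    return replicas[0]            # a real router balances over the replicas

assert route("UPDATE t SET x = 1", "primary", ["r1", "r2"]) == "primary"
assert route("SELECT * FROM t", "primary", ["r1", "r2"]) == "r1"
```

The sketch makes the contrast with RonDB concrete: with RonDB no such classification is needed, because every MySQL Server can serve both reads and writes on current data.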

If it is necessary to scale the application to more than one group replication set, sharding is the only option. This means that every transaction has to execute in its entirety in one group replication set, so the application needs to split up the data along some dimension. The current GA solution for InnoDB Cluster has no support for sharding; this has to be implemented by the users of InnoDB Cluster.

Architecture of RonDB in comparison#

Clustering based on RonDB is a lot easier to set up and handle. One sets up a number of MySQL Servers and a number of RonDB data nodes. Not only is it easier to set up a cluster using RonDB, the latency is also lower compared to InnoDB Cluster.

In a single node setup, InnoDB had the advantage of being able to execute the query directly in the MySQL Server. With MySQL InnoDB Cluster two more delays are introduced. The first is the introduction of the MySQL Router, which adds one more network jump before reaching the MySQL Server. The second is group replication, which gets involved in updating transactions. For updating transactions the latency is longer since 3 replicas have to be involved in a transaction protocol with three phases. RonDB has the same latency as it had when compared to single node InnoDB. Given that RonDB was already close to InnoDB in the single node case, RonDB has a clear advantage in both throughput and latency when comparing InnoDB Cluster and RonDB.

The throughput advantage comes from the fact that all MySQL Servers can be used for both reads and writes. This means that in write intensive workloads the throughput scales with more MySQL Servers added to the cluster. For InnoDB Cluster the scalability is limited to what one MySQL Server can handle since all writes will go through one primary MySQL Server.

There is no need to use a MySQL Router for RonDB although it is possible. In the case of using Java, the JDBC driver for MySQL supports round-robin selection of a MySQL Server from the client without using any router. For MySQL connectors that don’t support round-robin routing a MySQL Router is still required to distribute the load amongst the MySQL Servers in the cluster.

RonDB often uses Consul to load balance connections to the MySQL Servers.

One important difference is that with RonDB all MySQL Servers see the same data. Thus you can send the query to any MySQL Server connected to the cluster, and there can be hundreds of them. Each MySQL Server can read and update data in the entire cluster.

With InnoDB Cluster you must connect to the primary MySQL Server in one group to update data, and you can only read and update the data in this group. With a sharded solution the application will have to handle all queries against all shards whereas in RonDB all data in all shards (node groups) are available for querying. It is possible to perform both queries and updating transactions that span multiple shards in RonDB.

In order to read data in InnoDB Cluster you can connect to any MySQL Server in the group. If you connect to a replica MySQL Server you might see an older version of the data. The commit phase only ships the logs to all other MySQL Servers in the group; the logs have not yet been executed at commit time. Replica MySQL Servers will show data up to the point where the replication logs have been executed.

In order to do cross-shard queries, the application will have to do the work of sending the queries and collecting the data together.

The InnoDB Cluster solution is aimed at building highly available solutions with read scaling.

RonDB is designed to build scalable applications for both read and write at very high availability and is targeting applications that require predictable response times.

The fact that any MySQL Server can be used for both updates and reads makes it easier to develop a scalable application using RonDB, whereas InnoDB Cluster has the advantage that the InnoDB storage engine supports more features.

Differences in query execution#

InnoDB will execute each query in a single thread in one MySQL Server.

RonDB will control execution of the query from a single thread in one MySQL Server. RonDB can push down query condition evaluation to data nodes. These are evaluated in parallel if scans are sent to multiple partitions. In addition RonDB can push down joins to the RonDB data nodes. Thus RonDB can achieve substantial parallelism in the execution of a single query.

Differences in recovery#

Since InnoDB Cluster is a set of MySQL Servers configured to work together, on restart they first restart locally, then contact the other servers and start the distributed recovery. In RonDB the restart of a node is tightly integrated: the node becomes part of the cluster early on and must respect the heartbeat protocol all through the restart.

As an example, already in the first phase of cluster recovery RonDB decides which nodes are capable of restoring to the recovery point of the cluster. Nodes that cannot do so will have to wait until the first set of nodes has completed its recovery. After the first set has completed the cluster recovery, the remaining nodes will start up using the live cluster nodes to bring themselves up to the most recent version of all data.

There is a difference in how synchronisation of a starting node happens. In both clustering solutions, local recovery is the first step. This step recovers the transactions the local node saw before it stopped. The synchronisation phase differs significantly. In RonDB the starting node starts participating in the distributed transactions at a synchronisation point for each table partition. Then one row at a time (with a certain level of parallelism) is synchronised, where each starting node uses a live node to sync up. The live node knows whether a row needs to be sent to the starting node based on a commit timestamp in the row. Thus the algorithm is certain to progress towards completion even if the system executes at a high update rate and even if the database is large.

InnoDB Cluster instead relies on the starting node catching up with the live node(s). In order to catch up, the starting node must apply the logs faster than new ones are generated. It also needs sufficient space to store all the new logs generated while applying the log from the live nodes. There is no guarantee that the algorithm will complete under a high update rate during recovery. The biggest challenge is to bring up a new node from scratch with a large database that has heavy update activity. RonDB is designed to be recoverable even when millions of updates per second are occurring while data nodes recover.
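
The RonDB synchronisation idea can be sketched with a toy model (illustrative Python, not the actual implementation; the per-row commit timestamps and the recovery point are invented for the example):

```python
# The live node keeps a commit timestamp per row and only ships rows
# newer than the point the starting node recovered to locally.
live_rows = {"a": (3, "x"), "b": (7, "y"), "c": (12, "z")}
starting_rows = {"a": (3, "x"), "b": (2, "old"), "c": (1, "old")}
recovered_up_to = 3   # the starting node's local recovery point

def rows_to_ship(live, recovery_point):
    # Only rows committed after the recovery point need to be sent;
    # the amount of work is bounded by the data, not by the update rate.
    return {k: v for k, v in live.items() if v[0] > recovery_point}

starting_rows.update(rows_to_ship(live_rows, recovered_up_to))
assert starting_rows == live_rows
```

The design choice the sketch highlights is that progress is driven by walking the rows once, rather than by racing a continuously growing log, which is why the algorithm is certain to complete.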

When the new node has come close enough to the live nodes it is declared as up; at this point it can do updates. The risk that a MySQL Server in InnoDB Cluster isn't recoverable increases when the data set is large and the update rate is high. For web applications the recovery in InnoDB Cluster will work just fine. But for large applications with large data sets and heavy writing, availability is much better with RonDB, given that its recovery algorithm guarantees success.

In the Ph.D. thesis by Mikael Ronström, both of these algorithms were analysed for the telecom data server market. The RonDB solution was selected since it has guaranteed progress: there is no reliance on the starting node being able to catch up with the live node, and there is no moment where all commits have to stop while the new node enters again. The RonDB algorithm is certain to complete, which is important to reach the highest availability level.

Since RonDB is a tightly integrated cluster it is designed for automatic recovery. The data nodes will start recovery automatically after a failure, and if the machine stops it is desirable to ensure that the data nodes start up again from the OS startup scripts. InnoDB Cluster is a loosely integrated cluster and thus requires either manual intervention or scripts written to automate recovery. One can say that they are designed with slightly different design philosophies.

Differences in handling node failures#

The next difference is how the cluster decides which nodes are to survive. In the RonDB solution this is based on a heartbeat protocol that includes all data nodes in the cluster. In RonDB the decision on which nodes will survive is based on all node groups in the cluster, not just one.

A surviving set of nodes in RonDB must have at least one node in each node group. If it contains all nodes of at least one node group there is no risk of network partitioning and it will survive even if a majority of the nodes are down.

If there is a risk of network partitioning where no node group has all nodes surviving, then the cluster will survive if a majority of the previously alive nodes are still alive. In a 50/50 situation an arbitrator is used to decide whether a cluster will survive or not.

In the trivial case of 2 data nodes in 1 node group where one of them fails, the surviving node will be able to continue if it can access the arbitrator and the arbitrator accepts its request to form a new cluster. The arbitrator is any of the RonDB management servers or an API node; if the arbitrator fails, the cluster will select a new one immediately. Before the crash the nodes have agreed upon an arbitrator, and at crash time all surviving node partitions will contact the same arbitrator to ask for permission to go on. Only one node partition will be allowed to proceed, to avoid network partitioning.
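The survival rules described in the last few paragraphs can be summarised in a small decision function. This is a simplified sketch under the assumptions stated in the comments, not RonDB source code; the function and variable names are invented for illustration.

```python
def can_survive(partition, node_groups, previously_alive, ask_arbitrator):
    """Decide whether a set of surviving nodes (a partition) may form a
    new cluster. node_groups is a list of sets of node ids; partition and
    previously_alive are sets of node ids; ask_arbitrator is a callback
    modelling the arbitrator's single grant."""
    # A viable partition needs at least one node from every node group,
    # otherwise part of the data would be missing.
    if any(not (group & partition) for group in node_groups):
        return False
    # Holding ALL nodes of some node group rules out any competing
    # partition, so there is no risk of network partitioning.
    if any(group <= partition for group in node_groups):
        return True
    # Otherwise a strict majority of the previously alive nodes wins ...
    if len(partition) * 2 > len(previously_alive):
        return True
    # ... and on an exact 50/50 split the arbitrator breaks the tie,
    # granting permission to exactly one partition.
    if len(partition) * 2 == len(previously_alive):
        return ask_arbitrator(partition)
    return False

# Two node groups with two replicas each; a network split separates
# nodes {1, 3} from nodes {2, 4}.
node_groups = [{1, 2}, {3, 4}]
alive = {1, 2, 3, 4}
grant_first = lambda p: p == {1, 3}  # arbitrator grants exactly one side

print(can_survive({1, 3}, node_groups, alive, grant_first))  # True
print(can_survive({2, 4}, node_groups, alive, grant_first))  # False
```

In the example, both sides of the split hold one node per node group and exactly half the previously alive nodes, so the arbitrator's grant is what keeps exactly one side running.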

In InnoDB Cluster the decision is part of the Paxos protocol, which means that it is a majority decision. Thus in the case of only 2 nodes being alive when a node fails, a decision cannot be made and both nodes will enter read-only mode until the stopped node comes back or until manual intervention adds a new node to the group. Thus RonDB needs 2 replicas for highly available operation whereas InnoDB Cluster requires 3 replicas.

In RonDB the decision whether the surviving nodes will survive is based on the previously alive nodes. Each node failure is a transaction in RonDB, thus all nodes will see the failures in the same order. When a set of nodes fails we decide whether a majority of the previously alive nodes are still alive. This means that if we start with 4 replicas we will survive 2 nodes failing at the same time; after that we still have 2 nodes up and can survive one of those nodes failing as well. It also means that RonDB can start with 3 replicas, reach 1 remaining replica and still continue to service updates. This comes from the benefit that nodes in RonDB fail as transactions.

By separating the decision as to which nodes will survive from the nodes that contain the actual data, RonDB can run just fine with 2 replicas and will continue to service updates even in the presence of only 1 replica. InnoDB Cluster requires three replicas to handle one node failure. It cannot continue to service updates when 2 nodes out of three have failed, since it cannot form a majority in that case.

The difference is that RonDB tries to keep the cluster available for updates at all times, whereas for InnoDB Cluster it is ok to enter read-only mode for some time and involve manual intervention to decide which parts should survive. Here it is pretty clear that RonDB is designed for write-intensive applications that continue to operate in most error conditions. InnoDB Cluster is designed for highly available applications that are read-intensive and where it is ok to temporarily not be able to update the data.

Sharding differences#

The next difference is about sharding. RonDB runs distributed transactions over all shards (== node groups) and it can run both updating transactions as well as read queries over multiple shards. This is not possible in InnoDB Cluster.

Partitioning among the shards differs as well: partitioning of data in RonDB is automatic, using a hashing solution. For RonDB sharding is transparent to the application, although one can definitely get much better performance by using various hints as to where transactions are started and by ensuring that the partition key is part of the index scans we perform (Partition Pruned Index Scan, described in more detail later).
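The idea behind hash partitioning and partition pruning can be sketched as follows. Python's built-in `hash` stands in here for RonDB's internal hash function, and the partition count and function names are invented for illustration; the point is the shape of the algorithm, not the exact hash.

```python
NUM_PARTITIONS = 8

def partition_for(partition_key):
    """Map a partition key value to one of the table's partitions."""
    return hash(partition_key) % NUM_PARTITIONS

def partitions_to_scan(partition_key=None):
    """If the scan condition includes the partition key, the scan can be
    pruned to the single partition that can hold matching rows;
    otherwise every partition must be scanned."""
    if partition_key is not None:
        return [partition_for(partition_key)]   # pruned index scan
    return list(range(NUM_PARTITIONS))          # scan all partitions

# A query with 'user_id = 17' where user_id is the partition key
# touches one partition; a query without the partition key touches all.
pruned = partitions_to_scan(partition_key=17)
full = partitions_to_scan()
print(len(pruned), len(full))  # 1 8
```

This is why including the partition key in index scans matters so much for performance: a pruned scan involves a single data node partition instead of fanning out across every node group.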

For InnoDB Cluster sharding has to be solved by the application itself.

Active-Active differences#

Next, InnoDB Cluster is mainly usable with a single updating primary at a time. This means that the clients must distinguish between read transactions and updating transactions. Read-only transactions can run on any MySQL Server in the group whereas updating transactions can only go to the primary MySQL Server. Group replication does support multiple primaries, but this makes foreign key support no longer work fully (foreign key checks work, but foreign key actions are not propagated up to the replication layer). In practice a single primary is the normal use case for InnoDB Cluster.

Group replication handles active-active situations by using a sort of token ring solution. A node can update while holding the token. After releasing the token other nodes can update, and the node has to wait with further updates until it gets the token again. In this manner conflicts are avoided by a sort of timesharing mode. The token can be held for several transactions; each time a server gets the token it gets a range of commits that it can do. Transactions can execute in parallel in all nodes but they must commit in order, and if any conflicts occur the transaction whose read set or write set has been updated must be aborted. Group replication uses an optimistic concurrency control protocol whereas RonDB uses pessimistic concurrency control. Optimistic protocols often have advantages when concurrency is limited, but when conflict probability is high they will get frequent aborts whereas pessimistic protocols handle conflicts well.
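The optimistic, certification-style conflict check described above can be sketched as below. This is a deliberately simplified model of the idea (not group replication's actual certification algorithm, which compares versioned write sets): transactions execute concurrently, then certify in commit order, and a transaction is aborted if its read or write set overlaps the write set of a transaction that certified before it.

```python
def certify(transactions):
    """Certify transactions in commit order; return (committed, aborted).
    Each transaction is a dict with an id and its read and write sets."""
    certified_writes = set()
    committed, aborted = [], []
    for tx in transactions:
        # Conflict: something this transaction read or wrote was already
        # written by an earlier certified transaction.
        if (tx["reads"] | tx["writes"]) & certified_writes:
            aborted.append(tx["id"])        # optimistic protocol: abort on conflict
        else:
            certified_writes |= tx["writes"]
            committed.append(tx["id"])
    return committed, aborted

txs = [
    {"id": "t1", "reads": {"a"}, "writes": {"b"}},
    {"id": "t2", "reads": {"b"}, "writes": {"c"}},  # read 'b' after t1 wrote it
    {"id": "t3", "reads": {"d"}, "writes": {"e"}},  # no overlap, commits
]
print(certify(txs))  # (['t1', 't3'], ['t2'])
```

The sketch makes the trade-off visible: with disjoint working sets everything commits without locking, but as overlap grows the abort rate grows with it, which is the weakness of optimistic protocols under high conflict probability.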

This is one more reason to stay away from multiple primaries with InnoDB Cluster: the optimistic concurrency protocol makes applications harder to develop.

Designing load balancers for RonDB is easy. One can simply pick any MySQL Server for any transaction. All MySQL Servers (even if there are hundreds of them) can handle any transaction equally well.

RonDB supports adding new node groups (== shards) online and reorganising the data to start using the new node groups as updating transactions continue to be serviced.

RonDB supports combining tables that are sharded and spread over many node groups with tables that are fully replicated and existing on all nodes in the cluster. This makes it possible to design applications that have both distributed parts and parts that are more centralised. Thus RonDB can handle a flexible application design.

Detailed differences#

If one node fails in InnoDB Cluster or in RonDB the cluster will continue to operate; a few in-flight transactions might be aborted, and for InnoDB Cluster one needs to use the new primary MySQL Server for updates. RonDB will be available for traffic again as soon as the node failure has been discovered.

At recovery after a complete cluster crash InnoDB Cluster will not lose any transactions if sync_binlog was set to 1. There is still a chance of lost transactions when going from the group replication layer to the relay log, so it is not absolutely safe. RonDB will lose up to a few seconds of transactions in this case, but the recovered data will always represent a transaction-consistent commit point. InnoDB Cluster will also lose a few transactions when sync_binlog is set to 0.

NDB has been in production usage since 2004, and there are currently many thousands of clusters in production usage in highly available applications. RonDB is in production operation at customers of the Hopsworks feature store and is also used by a number of community users. InnoDB Cluster is a product first released as GA in December 2016. Both are actively developed.

What clustering solution to select#

InnoDB Cluster is designed for users that use MySQL together with InnoDB. It is a natural extension of MySQL Replication that turns it into a high availability solution. It competes very well with other clustering solutions in the MySQL area and does a very good job of providing a high availability solution for the NoSQL area. It will serve as a natural path for people currently using MySQL Replication with InnoDB to move to a new replication architecture.

Selection based on availability#

The most important reason to choose RonDB is the higher availability it provides. Both solutions support high availability, but RonDB has been battle-tested for 15 years.

RonDB requires only two replicas to stay available whereas InnoDB Cluster requires 3 replicas to stay highly available.

RonDB supports online reorganisation of data when new nodes are added. RonDB supports adding and dropping indexes online and adding new columns can be done online. RonDB is designed for constant uptime for both read and write. RonDB supports online upgrade to a new version with no downtime while still handling reads and writes.

RonDB can be combined with MySQL Replication for disaster recovery and for any changes not possible to handle in an online cluster. These clusters can even run in active-active mode with conflict detection and resolution.

InnoDB Cluster can be combined with asynchronous replication to other clusters. It is possible to replicate from RonDB to InnoDB Cluster and vice versa.

Selection based on real-time behaviour#

RonDB was designed for predictable response times and has gone through tough tests by demanding users to prove this. It can provide 100% of the responses within time limits down to hundreds of microseconds for demanding telecom and financial applications.

Many of our users require 100% of the responses of complex transactions to complete within 10 milliseconds; this makes it impossible to use a solution based on disk-based storage.

InnoDB Cluster relies on one replication channel that handles one transaction at a time. If a large transaction is committed, this will affect the latency of all transactions queued up behind this long transaction.

Selection based on ease of application development#

When using an application designed for InnoDB there are many advantages in using InnoDB Cluster. On the other hand, it is easier to design against RonDB in that all MySQL Servers are equal: no special primary must be used for updates and all MySQL Servers see the most current data.

The consistency model of RonDB is that all applications see updates immediately. The consistency model of InnoDB Cluster is eventual consistency. This means that you cannot be certain to see your own updates, which makes application development considerably more difficult.

Selection based on scalability#

Both solutions scale very well for reads; for write scaling RonDB scales to many node groups (== shards) without changes to the application code.

RonDB can combine normal sharded tables that are synchronously updated with global tables that are fully replicated in the cluster for scalable application development.

RonDB scales to tens of millions of update transactions per second, InnoDB Cluster scales to tens of thousands of update transactions per second.

Selection based on data size#

InnoDB Cluster is able to store more data per server; however, since each server needs to contain the full data set, the total data size is limited to what one MySQL Server can hold.

At the same time RonDB can scale to many data nodes using hundreds of gigabytes per data node, so the total size possible to store in RonDB and in InnoDB Cluster is of the same order of magnitude. If one also uses disk data for much of the data in RonDB one can store terabytes per data node. RonDB has a more reliable recovery algorithm when handling large data sets.

Selection based on storage engine feature set#

Both RonDB and InnoDB support all character sets supported by MySQL.

RonDB has a limit on row sizes: the fixed size columns can be at most 8052 bytes and the total row size can be at most 14 kBytes.

InnoDB supports fulltext indexes, RonDB doesn’t. Both RonDB and InnoDB support GIS, but only InnoDB supports a GIS index. Both RonDB and InnoDB support the JSON data type and indexes on stored generated columns; InnoDB additionally supports storing a JSON value as a virtual column and having an index on it. InnoDB stores all data using a page cache and can thus scale to data sizes covered by the disks; RonDB stores data in memory by default to ensure that it always delivers predictable response times. RonDB can store non-indexed fields using a page cache backed by data on disk. Both RonDB and InnoDB use tablespaces to handle disk pages.

InnoDB runs each query using a single thread inside the MySQL Server. RonDB runs queries in a number of cooperating threads, and queries are automatically parallelised when more than one partition is used in the query. RonDB supports pushing down joins into the data nodes where they execute with some level of parallelism.

InnoDB uses a clustered B-tree index for primary keys and separate B-tree index tables for secondary indexes. RonDB always uses a main memory hash index for primary keys and unique keys. Ordered indexes are implemented using a main memory T-tree implementation. The unique indexes are global and stored in a separate table completely handled by RonDB, while the primary key hash index and the ordered indexes are part of each partition.

RonDB and InnoDB use slightly different consistency models, as explained in the previous chapter on differences between RonDB and InnoDB.

Final words#

NDB was designed for telecom servers that require constant uptime, and for scalability to a large number of nodes while still maintaining consistency of its data. Thus if you are aiming for a scalable application, plan to grow it to hundreds of nodes and want your application to see one common view of data, then RonDB is a good fit.

Not surprisingly the original requirements in the design of RonDB are the key factors that will play in favor of RonDB in this selection.

There are many areas where InnoDB Cluster will be beneficial. It will be a more natural alternative for existing InnoDB applications that require the feature set available in InnoDB.

Note that there are many areas where RonDB is best used without mentioning its connection to MySQL. One such area is the implementation of file systems and block devices based on RonDB. Another is the design of scalable and highly available LDAP servers. In such situations an SQL database would not even be an option. For most of the telecom server use cases, RonDB is the only real option.

For file servers the reason RonDB is such a useful solution is its division into a Query Server and a Data Server. A file server solution will seldom require any SQL queries; they would only add unnecessary overhead. These applications are best built using the direct APIs to the data nodes of RonDB. The easiest of those APIs to use is ClusterJ for Java access. The fastest is the C++ NDB API. It is also possible to access the data nodes directly from JavaScript using the NodeJS API.

Newer developments also make it possible to access RonDB through various REST calls.