
Restore#

In this chapter, we will discuss how to restore a backup. Restore uses a program called ndb_restore. This program always takes as input the files generated by a RonDB backup.

In its normal usage, ndb_restore is applied to a cluster that is currently empty. Thus, before restoring, it is assumed that the user has performed an initial start of the cluster. If the cluster contains data, the simplest method to remove it is to drop all user databases.

It is also possible to use restore as an import. It is, however, essential to understand that the tables restored in this process are not usable until the entire restore process has completed. Trying to use them before that gives unpredictable behavior. Other tables not affected by the restore can still be used predictably.

To start this program we need to know at the very least the following things:

  1. Placement of RonDB management server(s)

  2. The node ID of the data node being restored

  3. The backup ID

  4. The path to the backup files

To restore we go through several steps:

  1. Restore metadata

  2. Disable indexes

  3. Restore data for each data node

  4. Rebuild indexes

Restoring metadata#

Here we have two options. We can restore the metadata from the backup taken in RonDB or we can use the metadata created by mysqldump.

Using native backup#

One thing to consider when restoring metadata is how to handle tablespaces and the logfile group used for disk columns. If those already exist, or if they were created as part of the initial start of the cluster, we need to ensure that we don’t attempt to create them again as part of the restore process. We will assume here that they already exist; if not, simply remove the parameter --no-restore-disk-objects.

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=N \
    --backupid=B \
    --backup-path=/backup/path/dir \
    --restore_meta \
    --no-restore-disk-objects

This command is executed only once, using the backup files from any one of the nodes that produced the backup. It must be executed before the invocations of ndb_restore using --restore_data.

After executing this ndb_restore command it is important to issue a SHOW TABLES command to ensure that metadata is synchronized with the various MySQL servers in the cluster and that the new tables are properly replicated to the replica side.
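As a minimal sketch, such a check could be run through the mysql client; the host name mysql_host and the database name dbname below are hypothetical placeholders:

mysql -h mysql_host -e "SHOW TABLES IN dbname"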

Using MySQL dump#

The other option is to restore the metadata from an SQL file that was produced by mysqldump. In this case the MySQL client reads the SQL file and sends its SQL commands to a MySQL Server, which recreates the metadata objects as shown below.

This is the recommended approach since it is synchronized with Global Replication and automatically synchronizes the MySQL servers in the cluster such that they have the same metadata.

mysql < restore_meta.sql

The above command can also be used if it is desirable to restore other MySQL objects.
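For reference, a dump file such as restore_meta.sql could have been produced from the original cluster with mysqldump; the database name dbname below is a hypothetical placeholder, and --no-data limits the dump to table definitions only:

mysqldump --no-data --databases dbname > restore_meta.sql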

Disabling indexes#

We recommend disabling indexes before restoring data. If there are unique indexes in the backup, this is a must, since otherwise the unique constraints cannot be guaranteed.

Remember that a backup first scans the tables and records rows in the backup files. Since the scan can go on for a long time, there is no consistency among the records in this scan. The consistency comes from also applying the backup log file, where all changes since the start of the backup are recorded. Once this is done we have restored a consistent database. Restoring the data thus happens in two stages: first the scanned rows are inserted, then the log is applied. During insertion of the scanned rows, the unique index constraints are not guaranteed to be valid. Therefore it is a must to disable unique indexes before restoring data and to rebuild them after the data has been fully restored.

For ordered indexes, it is not a must to disable indexes before restoring the data. However, the parameter --disable-indexes applies to unique indexes as well as ordered indexes, and it is faster to disable and later rebuild ordered indexes than to maintain them during the restore. The recommendation is to always use this procedure.

The second reason to disable indexes while restoring data is speed: it is faster to perform those three steps than to restore data with indexes active. Rebuilding indexes can be parallelized even beyond the number of ldm threads, rebuilding requires no communication while it is ongoing, and there is a higher chance of CPU cache hits when rebuilding indexes compared to inserting one row at a time.

Commands to run#

All phases except the --restore-data phase require only one invocation of ndb_restore.

Here is the set of commands used to restore a 2-node cluster where the data nodes’ node IDs are 1 and 2 and the backup ID is 10. The following is the same whether we have 2 data nodes with 1 node group (2 replicas) or 2 data nodes with 2 node groups (1 replica each):

  1. Restore metadata (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore_meta \
        --no-restore-disk-objects
    
  2. Disable indexes (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --disable-indexes
    
  3. Restore data for each data node (can be run in parallel):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore-data
    
    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=2 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore-data
    

    To restore data for a complete cluster we have to invoke ndb_restore once for each node in the cluster that produced the backup. If there were 4 data nodes in the cluster where the backup was taken, we need to invoke ndb_restore 4 times to restore the data.

  4. Rebuild indexes (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --rebuild-indexes
    

During the restore process used above, the indexes are not updated; this is, however, not known by the MySQL Server. Using the MySQL Server during the restore process can therefore return inaccurate data, since only the primary key index is correct. Thus only primary key lookups and full table scans provide correct data.

During the restore process the data in the tables isn’t consistent.

Partial restore#

If only a subset of the tables or databases are to be restored it is possible to specify those with the parameters:

  • --exclude-databases

  • --include-databases

  • --exclude-tables

  • --include-tables

--exclude-databases and --exclude-tables specify the databases and tables that must not be restored; all other databases and tables are restored. --include-databases and --include-tables specify the databases and tables that are to be restored; no other databases and tables are restored.

When specifying a table with --include-tables or --exclude-tables, one must specify both database name and table name with a dot between them, as in dbname.tablename. These parameters can be specified multiple times, or a comma-separated list of database and table names can be used.
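As an illustration, the following sketch restores only two hypothetical tables from the backup (the database and table names are placeholders); the same parameter would be added to the corresponding --restore-data invocation for every data node:

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --restore-data \
    --include-tables=dbname.table1,dbname.table2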

Restore for upgrade#

Restoring a backup can be a method to perform various software upgrades and also changes to the schema. The backed-up data therefore need not match the new schema exactly.

Restoring for an upgrade means that we create tables that have differences to the original tables, but they are populated using the same data.

--exclude-missing-tables means that any tables that are missing when performing a restore are ignored. Normally the restore will fail if it tries to insert or change data in a table that hasn’t been created before the restore of data starts. Similarly, --exclude-missing-columns ensures that columns that have been removed are ignored when importing the data from the backup.

The parameter --lossy-conversions indicates that it is ok to lose information in a column when inserting data from the backup. Examples are converting from signed to unsigned integer, or changing to a shorter string type.
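A sketch of an upgrade-style restore combining the parameters above, assuming the new schema (with tables and columns removed or narrowed) has already been created:

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --restore-data \
    --exclude-missing-tables \
    --exclude-missing-columns \
    --lossy-conversions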

A few changes are allowed without any special startup parameters to ndb_restore. These are:

  1. Changing row format (FIXED to DYNAMIC and vice versa)

  2. Changing storage type from DISK to MEMORY and vice versa

  3. Changing the default value of a column

  4. Changing the partition key

To "promote" a column means to store it in a data type that is similar but larger. E.g. a string type that is longer. If we promote a fixed size CHAR column to a variable size column, we can specify the parameter --preserve-trailing-space. Thus the trailing spaces in the CHAR column that are required in a CHAR column are also transported into the VARCHAR column. To enable promotion of variables it is necessary to set the --promote-attributes parameter.

There is a limit to how many types of conversions can happen at the same time. We can convert columns from VARCHAR to TEXT columns, but it isn’t possible to convert from MEDIUMINT to BIGINT at the same time. Conversions from one VARCHAR to another VARCHAR and conversions from BINARY to VARBINARY can be performed without this restriction.

In older versions of NDB only fixed-size strings were supported. When performing a restore of a backup from such an old NDB version, ndb_restore will automatically convert VARCHAR columns to variable-sized columns. If the old fixed-size variant needs to be retained, one should use the parameter --no-upgrade. It is most likely never needed.

The options --skip-broken-objects and --skip-unknown-objects can be used to ignore unknown objects and also some broken objects. This can happen in downgrade situations.

It is possible to change the name of a database during restore. The parameter --rewrite-database takes the old database name and the new database name, separated by a comma, e.g. --rewrite-database=dbold,dbnew. This option can be repeated for each database that should change name.
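A small sketch restoring data while renaming a hypothetical database dbold to dbnew:

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --restore-data \
    --rewrite-database=dbold,dbnew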

Setting the parameter --skip-table-check is equivalent to setting the parameters --lossy-conversions and --exclude-missing-columns and --promote-attributes and --skip-unknown-objects.

Restore as part of Global Replication#

Setting up a new backup cluster from an operational primary cluster usually starts with restoring a backup. There are some parameters specifically related to replication when performing a restore.

First of all, it is necessary to set the parameter --no-binlog. This makes sure that the restore process doesn’t generate any binlog entries. These would take up unnecessary space.

To ensure that the ndb_apply_status table on the backup cluster has the correct information, we need to execute the final ndb_restore step with the flag --restore-epoch. This inserts a row into the ndb_apply_status table with server_id=0 and the epoch number restored by the backup. It doesn’t matter in which of the ndb_restore invocations this is executed, since the epoch number is available in all backup files.

The example in a previous section of restoring to a cluster with two data nodes will now be changed into the following parameters to support starting up Global Replication:

  1. Restore metadata (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore_meta \
        --no-restore-disk-objects \
        --no-binlog
    
  2. Disable indexes (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --disable-indexes \
        --no-binlog
    
  3. Restore data for each data node (can be run in parallel):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore-data \
        --no-binlog
    
    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=2 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --restore-data \
        --no-binlog
    
  4. Rebuild indexes and restore epoch (fully replicated):

    ndb_restore \
        --connectstring=mgm_host \
        --nodeid=1 \
        --backupid=10 \
        --backup-path=/backup/path/dir \
        --rebuild-indexes \
        --no-binlog \
        --restore-epoch
    

More details on starting up replication channels are in the part on Global Replication.

Restoring Distributed Privileges#

Restoring distributed privileges can be done using the ndb_restore program. It is not done by default.

To also restore those tables, it is necessary to add the parameter --restore-distributed-privileges. This parameter must be specified in all invocations of ndb_restore, since the distributed privilege tables need to be created as part of the metadata restore and their data from the various nodes must be restored.
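As an illustrative sketch, the parameter named above is simply appended to each of the steps shown earlier; for example the data restore for node 1:

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --restore-data \
    --restore-distributed-privileges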

Various other use cases for ndb_restore#

The normal (and default) use case is to restore the data into the cluster using the NDB API. This is handled by the --restore_data parameter shown above. You can also use the --restore_meta parameter, in which case only the metadata of the backup is restored.

If you want to view the data before you restore it, you can use --print_data and --print_meta to view data or metadata respectively. --print_data only prints the scanned tables, not the logged changes; these are printed when using the parameter --print_log. The parameter --print is equivalent to setting all three of those print parameters.
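A minimal sketch that only prints the backup contents without restoring anything (the connection parameters are the same placeholders as in the earlier examples):

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --print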

ndb_restore can output SQL statements for all operations it performs. ndb_restore can also output the information as CSV files. To do this, use the parameter --print_data together with the option --tab, which takes as argument the directory where the files are written. This creates a file called table_name.txt for each table in the backup.

It only prints the data files, not the backup log files. This approach has some limitations since it doesn’t make sense to write the log files to CSV files. These logs represent changes in the database since the start of the backup. Thus, restoring something consistent using these CSV files requires that the database wasn’t updated during the execution of the backup that is restored (i.e. the log is empty).

The output of the CSV files can be changed using:

--append means that we are appending to a file rather than writing a new file from scratch.

--fields-enclosed-by sets the character used to enclose fields.

--fields-terminated-by sets the character used to terminate a field.

--lines-terminated-by sets the character used to terminate a line.

--hex ensures that binary fields are printed in hexadecimal format.
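Putting these together, a sketch that dumps each table in the backup to a tab-separated file under a hypothetical directory /backup/csv:

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --print_data \
    --tab=/backup/csv \
    --fields-terminated-by='\t' \
    --lines-terminated-by='\n' \
    --hex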

Special parameters for ndb_restore#

The transactions used by ndb_restore always contain a single row per transaction. These transactions are sent in batches with a default parallelism of 128. To increase or decrease the parallelism, set the --parallelism parameter.

To set the frequency of progress reports, the --progress-frequency parameter can be used. It is set in seconds, and the default is 0, which means no progress reporting. The verbosity of the output can be changed using the --verbose parameter. It can be set between 0 and 255 and the default is 1.
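As a final sketch, a data restore with increased parallelism, a progress report every 10 seconds, and more verbose output (the values are chosen only for illustration):

ndb_restore \
    --connectstring=mgm_host \
    --nodeid=1 \
    --backupid=10 \
    --backup-path=/backup/path/dir \
    --restore-data \
    --parallelism=256 \
    --progress-frequency=10 \
    --verbose=2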