NDB Cluster Backup & Restore concept in brief:

In NDB Cluster, tables are horizontally partitioned into a set of partitions, which are then distributed across the data nodes in the cluster. The data nodes are logically grouped into nodegroups. All data nodes in a nodegroup (up to four) contain the same sets of partitions, kept in sync at all times. Different nodegroups contain different sets of partitions. At any time, each partition is logically owned by just one node in one nodegroup, which is responsible for including it in a backup.

When a backup starts, each data node scans the set of table partitions it owns, writing their records to its local disk. At the same time, a log of ongoing changes is also recorded. The scanning and logging are synchronised so that the backup is a snapshot at a single point in time. Data is distributed across all the data nodes, and the backup occurs in parallel across all nodes, so that all data in the cluster is captured. At the end of a backup, each data node has recorded a set of files (*.Data, *.ctl, *.log), each containing a subset of cluster data.

During restore, each set of files is restored to bring the cluster back to the snapshot state. The CTL file is used to restore the schema, the DATA file is used to restore most of the data, and the LOG file is used to ensure snapshot consistency.

Let's look at the NDB Cluster backup and restore feature through an example. To demonstrate this feature, let's create an NDB Cluster with the environment below. If you are wondering how to set up an NDB Cluster, please look into my previous blog here.

The restore works in three phases:

- It first restores the metadata from the *.ctl file so that all the tables/indexes can be recreated in the database.
- Then it restores the data files (*.Data), i.e. inserts all the records into the tables in the database.
- At the end, it executes all the transaction logs (*.log), rolling back or rolling forward to make the database consistent.

Since the restore will fail while restoring unique and foreign key constraints taken from the backup image, the user must disable the indexes at the beginning and rebuild them once the restore is finished.

Let's start the restoration of metadata. Metadata restore, index disabling and data restore can execute in one go, or can be done serially. The restore command can be issued from any data node, or from a non-data node as well. In this example, I am issuing the metadata restore and index disabling from Data Node 1, and only once. Upon successful completion, I will issue the data restore.

shell> bin/ndb_restore -n node_id -b backup_id -m --disable-indexes --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup_directory

From the above output, we can see that the restore was successful.

Restore skips restoration of system table data. The system tables referred to here are tables used internally by NDB Cluster, so they should not be overwritten by the data from a backup. Backup data is restored in fragments, so whenever a fragment is found, ndb_restore checks whether it belongs to a system table. If it does, ndb_restore skips restoring it and prints a 'Skipping fragment' log message.
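To tie the backup concept above to a concrete command: the native NDB backup described here is normally started from the management client with START BACKUP. The host and port below are taken from the example connectstring; this is a dry-run sketch that only echoes the command, since it assumes a running cluster.

```shell
# Dry-run sketch of taking the native NDB backup discussed above.
# MGM_HOST is an assumed management-server address (from the example connectstring).
MGM_HOST="cluster-test01:1186"
BACKUP_CMD="ndb_mgm -c $MGM_HOST -e 'START BACKUP WAIT COMPLETED'"

# In a live cluster, running this command reports a backup id, and each data
# node writes its share of the snapshot (BACKUP-<id>.<node>.ctl,
# BACKUP-<id>-0.<node>.Data, BACKUP-<id>.<node>.log) under its BackupDataDir.
echo "$BACKUP_CMD"
```

WAIT COMPLETED makes the management client block until every data node reports its backup fragment as written, which is convenient when scripting.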
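The serial restore flow described above (metadata restore with indexes disabled from one node, data restore once per data node, then an index rebuild) can be sketched as a script. The node ids, backup id and backup path here are assumed placeholder values, and each command is echoed rather than executed, since a live cluster is required to run them.

```shell
#!/bin/sh
# Dry-run sketch of the serial restore flow described above.
# NODE_IDS, BACKUP_ID, CONNECTSTRING and BACKUP_PATH are assumed values.
NODE_IDS="1 2"
BACKUP_ID=1
CONNECTSTRING="cluster-test01:1186,cluster-test02:1186"
BACKUP_PATH="/var/lib/mysql-cluster/BACKUP/BACKUP-$BACKUP_ID"

# Step 1: restore metadata (-m) and disable indexes -- issued from one node only.
FIRST_NODE=$(echo "$NODE_IDS" | cut -d' ' -f1)
echo "bin/ndb_restore -n $FIRST_NODE -b $BACKUP_ID -m --disable-indexes --ndb-connectstring=$CONNECTSTRING --backup_path=$BACKUP_PATH"

# Step 2: restore data (-r) -- issued once per data node's backup files.
for node in $NODE_IDS; do
  echo "bin/ndb_restore -n $node -b $BACKUP_ID -r --ndb-connectstring=$CONNECTSTRING --backup_path=$BACKUP_PATH"
done

# Step 3: rebuild the indexes disabled in step 1 -- issued once, after all data is in.
echo "bin/ndb_restore -n $FIRST_NODE -b $BACKUP_ID --rebuild-indexes --ndb-connectstring=$CONNECTSTRING --backup_path=$BACKUP_PATH"
```

To actually run the restore, remove the echo wrappers; the ordering matters, because unique and foreign key constraints can only be rebuilt once every node's data has been loaded.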