RonDB and Kubernetes / Helm#
RonDB can be managed through Kubernetes. We have used Helm to package RonDB for Kubernetes. The RonDB Helm GitHub repository is available under the Apache 2.0 license. It can be used to create RonDB clusters on AWS, Azure, Google Cloud, Oracle Cloud, OVH and many other cloud platforms, as well as on a Kubernetes cluster running on your own hardware.
The RonDB Helm chart supports:
- Creating custom-sized RonDB clusters
- Creating custom-sized multi-AZ RonDB clusters
- Scaling the number of data node replicas up and down
- Horizontal auto-scaling of MySQL Servers
- Horizontal auto-scaling of REST API servers
- Backup to Object Storage (e.g. S3)
- Restore from Object Storage (e.g. S3)
- Global Replication (cross-cluster replication)
Setting up RonDB using minikube#
Setting up a RonDB cluster using Kubernetes always starts with setting up a Kubernetes cluster. For development purposes, the easiest way to do this is to use minikube.
Here is a simple command to start a container inside minikube that should work on machines with at least 16 GB of free memory and 10 CPUs available for running the RonDB cluster.
minikube start \
--driver=docker \
--cpus=10 \
--memory=16000MB \
--cni calico \
--extra-config=kubelet.kube-reserved="cpu=500m"
Kubernetes 1.27 added a new feature that is useful for RonDB, called the Static CPU Manager. It makes it possible to pin a pod to a dedicated set of CPUs. RonDB will in this case automatically discover the CPUs it has available.
minikube start \
--driver=docker \
--cpus=10 \
--memory=16000MB \
--cni calico \
--feature-gates="CPUManager=true" \
--extra-config=kubelet.cpu-manager-policy="static" \
--extra-config=kubelet.cpu-manager-policy-options="full-pcpus-only=true" \
--extra-config=kubelet.kube-reserved="cpu=500m"
The below command can be used to enable metrics on CPU usage in minikube.
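Assuming a standard minikube installation, the metrics-server addon provides these metrics:
minikube addons enable metrics-server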
RonDB needs Persistent Volumes to store the database. The below command ensures that minikube can automatically create the Persistent Volumes and Persistent Volume Claims required by the RonDB Helm charts.
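A minimal way to do this is to enable minikube's built-in dynamic storage provisioning (these addons are often enabled by default, but enabling them explicitly does no harm):
minikube addons enable storage-provisioner
minikube addons enable default-storageclass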
The following command installs cert-manager, which provides a webhook required by the Nginx Ingress controller.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml
Now it is time to install the Nginx controller:
helm upgrade --install rondb-ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace=rondb-default \
--create-namespace \
--set "tcp.3306"="rondb-default/mysqld:3306" \
--set "tcp.4406"="rondb-default/rdrs2:4406"
Now we are ready to deploy our RonDB cluster.
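A minimal sketch of the install command, assuming the RonDB Helm chart has been checked out locally into ./rondb-helm and the configuration described in the next section has been saved as values.minikube.yaml:
helm install my-rondb ./rondb-helm \
    --namespace=rondb-default \
    --values values.minikube.yaml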
You can follow the status of the deployment while the pods start up.
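For example, by watching the pods in the RonDB namespace:
kubectl get pods -n rondb-default --watch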
If you want to look at the logs of one of the pods, the standard kubectl logs command handles that. Replace <pod-name> with the pod you want to investigate.
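For example:
kubectl logs -n rondb-default <pod-name>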
You can generate and verify data using the following commands:
helm test -n rondb-default my-rondb --logs --filter name=generate-data
helm test -n rondb-default my-rondb --logs --filter name=verify-data
When you are done using the RonDB cluster, you can shut it down.
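For example, by uninstalling the Helm release and, if the minikube cluster is no longer needed, deleting it as well:
helm delete my-rondb --namespace=rondb-default
minikube delete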
Tools such as Lens and k9s can be used to monitor the Kubernetes cluster.
Configuring RonDB Helm in minikube environment#
In the previous section we showed how to set up a RonDB cluster in minikube. The final step used a YAML file that described the configuration of the RonDB cluster. Now let us look into the content of this file.
benchmarking:
dbt2:
numWarehouses: 4
runMulti: |
# NUM_MYSQL_SERVERS NUM_WAREHOUSES NUM_TERMINALS
2 1 1
2 2 1
2 2 2
runSingle: |
# NUM_MYSQL_SERVERS NUM_WAREHOUSES NUM_TERMINALS
1 1 1
1 2 1
1 4 1
1 4 2
sysbench:
rows: 100000
threadCountsToRun: 1;2;4;8;12;16;24;32
clusterSize:
activeDataReplicas: 2
maxNumMySQLServers: 2
maxNumRdrs: 1
minNumMySQLServers: 2
minNumRdrs: 1
numNodeGroups: 1
isMultiNodeCluster: false
resources:
limits:
cpus:
benchs: 2
mgmds: 0.2
mysqlds: 3
ndbmtds: 2
rdrs: 2
memory:
benchsMiB: 500
ndbmtdsMiB: 3300
rdrsMiB: 500
requests:
cpus:
benchs: 1
mgmds: 0.2
mysqlds: 1
rdrs: 1
memory:
benchsMiB: 100
rdrsMiB: 100
storage:
diskColumnGiB: 4
redoLogGiB: 4
undoLogsGiB: 4
rondbConfig:
EmptyApiSlots: 2
MaxNoOfAttributes: 8000
MaxNoOfConcurrentOperations: 200000
MaxNoOfTables: 384
MaxNoOfTriggers: 4000
MySQLdSlotsPerNode: 4
ReplicationMemory: 50M
ReservedConcurrentOperations: 50000
SchemaMemory: 200M
SharedGlobalMemory: 300M
TransactionMemory: 300M
terminationGracePeriodSeconds: 25
Cluster Size section#
This section describes the number of Pods of various types.
The first thing to consider is the number of data nodes. This is determined by two variables. The first is numNodeGroups, which is mostly equal to 1 except for very large clusters. This parameter cannot be changed once the cluster is started. The second parameter is activeDataReplicas. This parameter can be varied from 1 to 3 and can be changed; in the example above it is set to 2. The number of RonDB data nodes will be numNodeGroups multiplied by activeDataReplicas, in this case 1*2=2.
The activeDataReplicas can be changed, but currently no auto-scaling rule is applied here. Thus only manual scaling is supported.
Next we specify the number of MySQL Servers. Here we have two numbers, the minimum number (minNumMySQLServers) and the maximum number (maxNumMySQLServers). Kubernetes can decide to increase or decrease the number of MySQL Servers within these limits, depending on the load condition described below.
The auto-scaling rule is based on CPU usage. When CPU usage goes above 70%, Kubernetes reacts to that condition and starts a new MySQL Server. It takes about 1-2 minutes before the new MySQL Server is up and running after the load has gone beyond the limit.
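The chart manages this for you, but conceptually the rule corresponds to a HorizontalPodAutoscaler along the lines of the following sketch (the object name and workload kind are illustrative, not necessarily what the chart creates; the replica bounds map to minNumMySQLServers and maxNumMySQLServers):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mysqld                  # illustrative name
  namespace: rondb-default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet           # illustrative; depends on how the chart deploys the MySQL Servers
    name: mysqld
  minReplicas: 2                # minNumMySQLServers
  maxReplicas: 4                # maxNumMySQLServers
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU usage exceeds 70%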
The same principle applies to REST API servers (RDRS servers; the binary is called rdrs2 in 24.10). The minimum number of REST API servers is set in minNumRdrs and the maximum in maxNumRdrs.
These configurations represent the most important parameters when setting up a RonDB cluster. With RonDB data nodes, MySQL Servers and REST API servers available, we can access RonDB using MySQL clients, and using REST API clients through a number of different endpoints: key lookups to retrieve rows and RonSQL to retrieve aggregate data using aggregate SQL queries on a single table.
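For example, one way to reach the MySQL Servers from the minikube setup above is to port-forward the mysqld service and connect with a standard MySQL client (the user name is a placeholder for a user you have configured):
kubectl port-forward -n rondb-default service/mysqld 3306:3306
mysql -h 127.0.0.1 -P 3306 -u <username> -p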
It is also possible to access RonDB data nodes from other applications written directly against the NDB API. In Hopsworks, one such example is HopsFS, where the name nodes use the NDB Java API (ClusterJ) to retrieve and update information in RonDB. These NDB API applications need to connect using the EmptyApiSlots covered in the RonDB configuration section.
Resource sections#
The previous section on Cluster Size specifies the number of Pods of various types and how they can scale. However, Kubernetes can also scale the resource usage of a specific Pod. Here too we have minimum and maximum settings. The resource limit section specifies the maximum settings and the resource request section specifies the minimum. Kubernetes will scale resource usage up and down within those limits for each Pod.
For the RonDB data nodes (ndbmtd) we only specify the maximum CPU and memory setting. This is interpreted as a fixed value. It can still be changed manually, but this will result in a restart of the affected Pods. Such a restart is performed as a rolling restart to ensure that the RonDB cluster remains available.
For storage, only the minimum is set. This too is interpreted as a fixed value that can only be changed manually using a rolling restart.
What we see here is that we can scale REST API servers (rdrs) both in terms of the number of servers and the number of CPUs used by each REST API server. We could, for example, scale from 1 to 40 REST API servers and have the CPU count scale from 1 to 16 CPUs. In this scenario we would have a setup that can automatically scale down to handle around 100k lookups per second with 1 CPU on 1 REST API server, and scale up to handle around 50-100M lookups per second using a total of 640 CPUs. MySQL Servers can auto-scale in a similar manner.
RonDB data nodes can also scale up and down in memory size, CPUs, storage and number of active replicas. All such changes require a rolling restart and must be triggered manually. This is by design, as a rolling restart may have a bigger impact on the cluster.
Resource limit section#
Two things are required in the resource limit section: the amount of CPU and the amount of memory required by each Pod. The resource limit is, as mentioned, the maximum resource usage.
The number of CPUs can be any number, except for RonDB data nodes where it needs to be an integer. RonDB data nodes (ndbmtd) also cannot have a setting in the resource requested section.
Memory sizes are specified in mebibytes (MiB), as suggested by the names.
In the cpus section, RonDB management servers are named mgmds, RonDB data nodes are named ndbmtds, MySQL Servers are named mysqlds, REST API servers are named rdrs and benchmark clients are named benchs.
In the memory section, RonDB data nodes are named ndbmtdsMiB, MySQL Servers are named mysqldsMiB, REST API servers are named rdrsMiB and benchmark clients are named benchsMiB. All memory resources are specified in mebibytes.
Resource requested section#
The requested resources are, as mentioned, the minimum. In this section we also set the storage sizes. The three storage sizes diskColumnGiB, redoLogGiB and undoLogsGiB are currently added together to calculate the total size of the Persistent Volume for the RonDB data node pod. The Persistent Volume of a pod cannot change size. Thus it is important that the storage resources are set to the maximum size that will ever be required. The number of CPUs and memory sizes can be changed, but currently Kubernetes has issues with changing the sizes of Persistent Volumes.
Storage sizes are specified in gibibytes (GiB), as suggested by the names. Storage configurations use binary units (i.e. 1G = 1GiB = 1024 MiB).
RonDB configuration section#
The section rondbConfig makes it possible to change the RonDB configuration. There are numerous parameters here that can be used. Most of them are related to memory usage and the number of cluster connections that API nodes will use to connect to RonDB data nodes. Each cluster connection (slot) will set up a socket to each of the RonDB data nodes.
The memory usage parameters have defaults that are calculated based on the available amount of memory. If one knows how much memory the various areas require, setting them explicitly can free up memory for the actual database memory.
| Parameter | Description |
| --- | --- |
| MaxNoOfTables | Setting this can be used to save memory space; indexes are also treated as tables here |
| MaxNoOfAttributes | Setting this can be used to save memory space |
| MaxNoOfTriggers | Setting this can be used to save memory space |
| TransactionMemory | Memory used for key operations, scan operations and transactions |
| SharedGlobalMemory | A memory pool that can be used by other memory areas |
| ReplicationMemory | Memory used for event handling, mainly used by Global Replication |
| SchemaMemory | Memory used for metadata about tables, indexes, triggers ... |
| DiskPageBufferMemory | Memory used as the disk page cache for disk columns |
| MaxNoOfConcurrentOperations | Limits the number of concurrent operations in one tc thread |
| MaxDMLOperationsPerTransaction | Limits the number of concurrent operations in a single transaction |
| MySQLdSlotsPerNode | The number of node ids that will be used by each MySQL Server |
| RdrsSlotsPerNode | The number of node ids that will be used by each REST API server |
| RdrsMetadataSlotsPerNode | REST API servers can have connections to a specific metadata cluster |
| EmptyApiSlots | The number of node ids available for other NDB API nodes to use (e.g. HopsFS, OnlineFS) |
| OsStaticOverhead | The amount of memory we will leave to the OS |
| OsCpuOverhead | Added to the memory left to the OS after multiplying by the number of CPUs |
| InitialTablespaceSizeGiB | The initial size of the tablespace used for disk columns |
| TransactionInactiveTimeout | After this many milliseconds a transaction is aborted when waiting for an API node |
| TransactionDeadlockDetectionTimeout | After waiting this long (in milliseconds) for data nodes, a transaction is aborted |
| HeartbeatIntervalDbApi | Heartbeat interval (in milliseconds) from data node towards API node; 4 missed heartbeats means a failed node |
| HeartbeatIntervalDbDb | Heartbeat interval (in milliseconds) from data node towards data node; 4 missed heartbeats means a failed node |
| TotalMemoryConfig | Instead of discovering the amount of memory, use this amount of memory |
Benchmarking section#
By default no benchmarking section is provided. If a benchmarking section is provided, there is a parameter enabled that defines whether a benchmark should be executed automatically on the cluster and a type parameter that defines which benchmark to run.
One can automatically run sysbench, a single-node DBT2 (dbt2_single), a multi-node DBT2 (dbt2_multi) or a YCSB benchmark (ycsb).
One can also define configuration for the benchmark tests even if they are not automatically executed.
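As a sketch, enabling an automatic sysbench run could look like the following (the exact nesting of enabled and type may differ between chart versions, so check the chart's default values file):
benchmarking:
  enabled: true
  type: sysbench
  sysbench:
    rows: 100000
    threadCountsToRun: 1;2;4;8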
dbt2#
numWarehouses specifies the number of warehouses to create in RonDB. runSingle specifies one line for each test that DBT2 should run. Each line requires 3 parameters: the number of MySQL Servers, the number of warehouses and the number of terminals. The number of threads executed in a test is equal to the product of those 3 parameters. runSingle should always have 1 MySQL Server. runMulti can have any number of MySQL Servers.
sysbench#
The sysbench section defines running an OLTP RW test in sysbench. The first parameter is threadCountsToRun, a semicolon-separated list of the number of threads to use in each run. The second parameter is rows, which specifies the number of rows to insert into RonDB and how many rows to use in the benchmark run. The third and final parameter is minimizeBandwidth. If this parameter is set, the scans will filter away most rows and only return a single row instead of 100. This turns sysbench into a CPU benchmark instead of a network benchmark.
ycsb#
The only parameter YCSB needs is the definition of the ycsb.usertable table. This is the table used by the YCSB benchmark.
Other parameters#
Here is a set of parameters that can be used when setting up the RonDB cluster. There are more parameters as well, but those will be explained in the context of setting up a production cluster in OVH.
| Parameter | Description |
| --- | --- |
| isMultiNodeCluster | Set this variable to true when running on a Kubernetes cluster with more than one node |
| staticCpuManagerPolicy | Setting this variable to true means that the RonDB data nodes will run on an exclusive set of CPUs |
| terminationGracePeriodSeconds | The number of seconds Kubernetes waits for a pod to shut down gracefully before killing it |
| imagePullSecrets | Name of the image pull secret, if required |
| imagePullPolicy | Policy for pulling RonDB images |
| enableSecurityContext | Enables the security context for RonDB pods |
| serviceAccountAnnotations | Annotations added to the service account used by the RonDB pods |
| images | In this section you can define where to find the Docker images for RonDB |
The top-level section timeoutsMinutes can be used to assign timeout periods. The first parameter is singleSetupMySQLds, which is the timeout in minutes for a single MySQL Server to start up. In clusters with exceptionally many tables and indexes this timeout might need to be increased.
The second parameter is restoreNativeBackup, which is the maximum time allowed for restoring data from a backup. If the backup is very large and the resources to restore are small this time might need to be increased, but this should be a very unlikely use case.
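A sketch of this section, with illustrative values:
timeoutsMinutes:
  singleSetupMySQLds: 5
  restoreNativeBackup: 60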
Setting up RonDB in a Kubernetes cluster#
There are many different Kubernetes environments available. In AWS one uses EKS, in Azure one uses AKS, in Google Cloud one uses GKE, in Oracle Cloud one uses OKE, and OVH also offers a managed Kubernetes service. You can also set up a Kubernetes cluster on your own hardware, in which case you might use e.g. the OpenShift platform to manage your Kubernetes cluster.
We will not document how to set up a Kubernetes cluster in these environments; instead we focus on the steps that are generic to all environments.
A Hopsworks cluster uses three different node types: Head nodes, Worker nodes and RonDB nodes.
The RonDB nodes are intended explicitly to run RonDB data nodes. These nodes require more memory, more control of the CPU usage and they are used in a less flexible manner compared to other nodes.
MySQL Server pods, REST API servers and RonDB management servers can run on any Head node. There are very few specific requirements on these pods other than the CPU available to them.
Worker nodes are intended for applications running on top of the Hopsworks application platform.
The RonDB Helm chart uses labels and taints to ensure that the proper nodes are used for the various pods.
This means that after creating the Kubernetes cluster nodes we have to set labels on all nodes and we have to set taints on the RonDB and the Head nodes. In the example below we have 2 RonDB nodes and 3 Head nodes.
kubectl taint nodes db-node-1 db-node-2 hw-node=RonDB:NoSchedule
kubectl label nodes db-node-1 db-node-2 hw-node=RonDB
kubectl taint nodes head-node-1 head-node-2 head-node-3 hw-node=Head:NoSchedule
kubectl label nodes head-node-1 head-node-2 head-node-3 hw-node=Head
External load balancers are available in most of the platforms. Below is an example of how to use annotations to define an external load balancer for MySQL Servers and for the REST API servers. This needs to be defined in the values file for the RonDB cluster.
meta:
mysqld:
externalLoadBalancer:
enabled: true
annotations:
loadbalancer.ovhcloud.com/class: "octavia" #not required for cluster running kubernetes versions >= 1.31
loadbalancer.ovhcloud.com/flavor: "small" #optional, default = small 200 MB/sec
rdrs:
externalLoadBalancer:
enabled: true
annotations:
loadbalancer.ovhcloud.com/class: "octavia" #not required for cluster running kubernetes versions >= 1.31
loadbalancer.ovhcloud.com/flavor: "small" #optional, default = small 200 MB/sec
Next, the values file needs to implement the nodeSelector and tolerations. The nodeSelector ensures that the pods are only started on nodes with the correct label. The tolerations allow the RonDB pods to be scheduled on the tainted nodes, while the NoSchedule taint ensures that no other pods are placed on them.
nodeSelector:
mgmd:
hw-node: Head
mysqld:
hw-node: Head
rdrs:
hw-node: Head
ndbmtd:
hw-node: RonDB
tolerations:
mgmd:
- key: "hw-node"
operator: "Equal"
value: "Head"
effect: "NoSchedule"
mysqld:
- key: "hw-node"
operator: "Equal"
value: "Head"
effect: "NoSchedule"
rdrs:
- key: "hw-node"
operator: "Equal"
value: "Head"
effect: "NoSchedule"
ndbmtd:
- key: "hw-node"
operator: "Equal"
value: "RonDB"
effect: "NoSchedule"
After these preparatory steps we are ready to start the cluster using helm install.
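For example, assuming the values above (load balancer annotations, nodeSelector and tolerations) have been merged into a file called values.production.yaml and the RonDB Helm chart has been checked out locally into ./rondb-helm:
helm install my-rondb ./rondb-helm \
    --namespace=rondb-default \
    --create-namespace \
    --values values.production.yaml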
Configuring RonDB to take backups#
When you create a RonDB cluster, or when you upgrade it, you can decide whether it should run regular backups. Currently backups have to be stored in S3 or S3-compatible storage. Below is an example of how one would configure a RonDB cluster to use backups. This particular example comes from running the RonDB cluster in the OVH cloud using their S3-compatible storage.
backups:
enabled: true
pathPrefix: "rondb_backup"
schedule: "0 3 * * mon"
s3:
provider: Other
endpoint: "https://s3.bhs.io.cloud.ovh.net"
bucketName: "jun-25"
region: "bhs"
keyCredentialsSecret:
name: "aws-credentials"
key: "aws-access-key-id"
secretCredentialsSecret:
name: "aws-credentials"
key: "aws-access-key"
serverSideEncryption: null
enabled needs to be set to true to activate the backups. The pathPrefix is the prefix of the RonDB backup in the configured bucket. Here we configure the backup to run at 3.00 AM every Monday morning and set the pathPrefix to rondb_backup.
We also need to define the S3 bucket. This includes setting the provider, the endpoint, the bucketName and the region where the S3 storage is located. We also need to provide the credentials, i.e. the AWS access key id and secret access key. Finally we need to decide whether to use serverSideEncryption or not.
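The credentials above reference a Kubernetes secret. A minimal sketch of creating it; the secret name and key names simply have to match the values file:
kubectl create secret generic aws-credentials \
    --namespace=rondb-default \
    --from-literal=aws-access-key-id=<YOUR_ACCESS_KEY_ID> \
    --from-literal=aws-access-key=<YOUR_SECRET_ACCESS_KEY>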
Every backup will get a unique id, thus several backups can be stored in the bucket. Kubernetes will not delete old backups.
Configuring RonDB to restore from a backup#
Restoring a backup can only be done on a new RonDB cluster. RonDB Helm does not allow restoring a backup in an already running RonDB cluster.
In this case one adds the below section to the values file. If one wants to keep this section but does not want it to restore from a backup, one can set backupId to null.
restoreFromBackup:
backupId: 1234567
pathPrefix: "rondb_backup"
objectStorageProvider: s3
excludeDatabases: []
excludeTables: []
s3:
provider: Other
endpoint: "https://s3.bhs.io.cloud.ovh.net"
keyCredentialsSecret:
name: "aws-credentials"
key: "aws-access-key-id"
secretCredentialsSecret:
name: "aws-credentials"
key: "aws-access-key"
bucketName: "jun-25"
region: "bhs"
The settings are the same as when defining a backup strategy. We don't need any cron job schedule, and enabling the restore is done by setting backupId. One can also exclude certain databases and certain tables. If a table is to be excluded, use the syntax database.table.
To discover the backup id one can look into the bucket; it is also printed in the logs. The following two commands can be used to get the backup id (set the namespace variable to your namespace). They obviously need to be executed before the cluster is destroyed.
POD_NAME=$(kubectl get pods -n $namespace --selector=job-name=manual-backup -o jsonpath='{.items[?(@.status.phase=="Succeeded")].metadata.name}' | head -n 1)
BACKUP_ID=$(kubectl logs $POD_NAME -n $namespace --container=upload-native-backups | grep -o "BACKUP-[0-9]\+" | head -n 1 | awk -F '-' '{print $2}')
echo $BACKUP_ID
Configuring RonDB for Global Replication#
The RonDB Helm chart supports setting up Global Replication between multiple RonDB clusters. To handle Global Replication we need to add two new node types. The first is the binlogServers node type. This is a specialised MySQL Server that runs in the active RonDB cluster. It collects all changes in the active cluster and makes them available for the passive RonDB cluster to fetch.
The second new node type is the replicaApplier node type. This is again a specialised MySQL Server. It runs in the passive cluster, retrieving changes from the active RonDB cluster and applying them to the passive RonDB cluster.
RonDB can be configured in Active-Passive setups, but it can also run in Active-Active setups. In Active-Active setups replication goes in both directions. Thus both binlogServers and replicaAppliers will run in both RonDB clusters.
Normal MySQL Servers in RonDB are stateless, so restarting them with an initial restart is not a problem. The binlogServers store the binlog, which requires persistent storage; similarly the replicaApplier stores the relay log, which also requires persistent storage. Thus they need to be handled with a bit more care than the normal MySQL Servers.
Setting up the Active RonDB cluster#
There are two sections related to Global Replication in the active RonDB cluster. The first is the globalReplication section. This sets the cluster number, which needs to be unique among the RonDB clusters involved in the same Global Replication. Since we are the active RonDB cluster, we need to define the parameters attached to the primary side of the Global Replication.
globalReplication:
clusterNumber: 1
primary:
binlogFilename: binlog
enabled: true
expireBinlogsDays: 1.5
ignoreDatabases: []
includeDatabases: []
logReplicaUpdates: false
maxNumBinlogServers: 2
numBinlogServers: 2
The settings we have to set are binlogFilename, enabled, expireBinlogsDays, ignoreDatabases, includeDatabases, logReplicaUpdates, maxNumBinlogServers and numBinlogServers.
logReplicaUpdates is required if we set up a chain of more than two RonDB clusters; setting it means that applied updates will also be logged in the binlog. In a setup with only 2 RonDB clusters it isn't necessary to do this.
maxNumBinlogServers impacts the config.ini in that it allocates the node ids that are required by the binlog servers. numBinlogServers is the actual number of binlog servers that will run in the cluster, given that globalReplication is enabled. numBinlogServers can be increased up to the value of maxNumBinlogServers.
One also needs to define the binlogServers in the meta section of the RonDB values file. Here one decides whether to use end-to-end TLS encryption for the connections between the passive and active RonDB clusters. One also sets the port numbers used (normally the default MySQL Server port number). The TLS setup requires setting CA and certificate information.
The reason for using an external load balancer is that it makes it possible to have a single IP address. The external load balancer also creates an external IP address that can be used to access the binlog servers from the passive RonDB cluster. Otherwise one needs to perform port forwarding from an external port and external IP address to be able to communicate with the binlog servers.
meta:
binlogServers:
externalLoadBalancers:
annotations: {}
class: null
enabled: true
namePrefix: binlog-server
port: 3306
headlessClusterIp:
name: headless-binlog-servers
port: 3306
statefulSet:
endToEndTls:
enabled: true
filenames:
ca: null
cert: tls.crt
key: tls.key
secretName: binlog-end-to-end-tls
supplyOwnSecret: false
name: mysqld-binlog-servers
Setting up the Passive RonDB cluster#
The setup in the passive RonDB cluster also requires a globalReplication section. But here we instead define the secondary cluster.
globalReplication:
clusterNumber: 2
secondary:
enabled: true
replicateFrom:
binlogServerHosts: [13.123.11.12]
clusterNumber: 1
ignoreDatabases: []
ignoreTables: []
includeDatabases: []
includeTables: []
useTlsConnection: true
Most settings are self-explanatory here; the IP address provided is just an example of what might point to the external load balancer in the active RonDB cluster.
In this case we need to define the replicaAppliers section. This is more or less the same as the setup for the binlog servers. There is no need to have multiple replica appliers; if one fails, Kubernetes will ensure it starts up again on some other node.
replicaAppliers:
headlessClusterIp:
name: headless-replica-appliers
port: 3306
statefulSet:
endToEndTls:
enabled: true
filenames:
ca: null
cert: tls.crt
key: tls.key
secretName: replica-applier-end-to-end-tls
supplyOwnSecret: false
name: mysqld-replica-appliers
Configuring RonDB with monitoring#
Monitoring of RonDB happens through a mysqld exporter that uses one of the MySQL Servers to query tables in the ndbinfo schema. It exposes this information at an HTTP endpoint.
A Prometheus server can then regularly fetch and store this information. A Grafana server queries the metrics from Prometheus and makes the RonDB dashboards available.
This requires a mysql section in the RonDB Helm chart. This section needs to provide information about the users, an exporter section, and a config section with MySQL settings.
mysql:
credentialsSecretName: "SecretName"
users:
- username: "username"
host: "%"
privileges:
- database: "*"
table: "*"
withGrantOption: true
privileges: ["ALL"]
exporter:
enabled: true
config:
maxConnections: 512
maxConnectErrors: 9223372036854775807
maxPreparedStmtCount: 65530
Resource configurations#
In addition to the items already discussed in the above sections, we also need to define CPU resources for the mysqld exporter if it has been configured. The restore pod used to restore from a backup similarly requires CPU resources to be defined. One also needs to define the amount of memory required by the mysqld exporter.