MySQL Galera Cluster: How many nodes do you really need?

The MySQL Galera Cluster is a fine piece of software which brings synchronous multi-master replication. This ensures high availability of your database. The following has been tested with Percona XtraDB Cluster.

In order to achieve the desired fault tolerance, we must examine and understand how the cluster responds to node failures. This Percona blog article gives some good insight, but a few more clarifications are needed.

There are two different kinds of failures that may occur while the cluster is operating:

  • Simultaneous failure of two or more nodes.
  • One-by-one failure. This is the case when a node fails, and the cluster notices and reacts to it before another node fails, too. The usual reaction time is 5 seconds and is controlled by the “suspect timeout” setting (evs.suspect_timeout); see the configuration example right after this list.
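
For reference, this is how the suspect timeout could be changed, for example to 10 seconds, via “wsrep_provider_options” in “my.cnf”. This is only a sketch; the 10-second value is an arbitrary illustration, not a recommendation:

# Galera expects an ISO 8601 duration here; PT10S means 10 seconds (the default is PT5S)
wsrep_provider_options="evs.suspect_timeout=PT10S"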

When one or more nodes fail, the cluster reacts in a very clever way:

  1. The remaining alive nodes run a quorum vote, in order to determine if their count is >50% of the last cluster size.
  2. If the cluster can continue to operate with a quorum, it re-adjusts its size! This is a crucial feature which lets the cluster lose nodes one-by-one until only two active nodes are online. The current size and the quorum state can be checked as shown right below.
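
Both values are exposed as standard wsrep status variables on any alive node:

SHOW STATUS LIKE 'wsrep_cluster_size';
SHOW STATUS LIKE 'wsrep_cluster_status';

A value of “Primary” for “wsrep_cluster_status” means that the node belongs to a component which has quorum and is therefore operational.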

❓ Now the question is, if we have a cluster with 3 data nodes, is it possible to lose 2 of them, and still continue to operate? The answer is “yes”, but only if you lose them one-by-one, and only if you run an additional arbitrator node in your cluster. Note that the arbitrator does not store any data but still participates in the whole network replication traffic, in order to be able to vote.
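
In practice the arbitrator is run as the “garbd” daemon on the separate machine. A minimal invocation could look roughly like the following sketch, where the cluster name (which must match “wsrep_cluster_name”) and the node IP addresses are placeholders for your own values:

# join the cluster as an arbitrator only; this machine stores no MySQL data
garbd --group=my_galera_cluster --address="gcomm://169.254.50.1,169.254.50.2,169.254.50.3"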

Three data nodes and an arbitrator node make a cluster with a size of four nodes. When one of the nodes fails, 3/4 of the nodes are alive which is >50% quorum and the cluster continues to operate by reducing its size to three. When a second node fails, 2/3 of the nodes are alive which is >50% quorum and the cluster continues to operate with a size of two. You cannot lose a third data node since you have only three initially anyway. 🙂

If you lose two data nodes simultaneously, or if you lose one data node and the arbitrator (a total of two nodes) simultaneously, you are out of luck. Only 2/4 of the nodes are alive which is not >50% quorum. The cluster stops operating, in order to prevent a possible split-brain situation.

Here is a summary of how many nodes you can lose with the two different cluster configurations. Note that the arbitrator counts as a regular node when it comes to losing it:

  • 3 nodes with an arbitrator (initial cluster size is 4):
    • You can lose 2 nodes in a one-by-one fashion.
    • You can lose only 1 node simultaneously. Losing 2 nodes simultaneously kills your whole cluster.
  • 3 nodes without an arbitrator (initial cluster size is 3):
    • You can lose only 1 node even in a one-by-one fashion.
    • You can lose only 1 node simultaneously. Losing 2 nodes simultaneously kills your whole cluster.

So running an arbitrator in a three-node MySQL Galera Cluster makes total sense, if you can allocate one more separate machine with the same network capabilities.

✩ Note that regular MySQL node shutdowns are handled differently by the cluster. When a node leaves the cluster via a normal shutdown, it informs all members of the cluster about this. Therefore, it should be safe to shut down even 2 out of your 3 data nodes simultaneously.
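
For example, a clean shutdown of a node, which properly informs the rest of the cluster, is just the regular service stop. The commands below assume a Percona XtraDB Cluster installation where the service is named “mysql”; the exact service name may differ on your system:

# on the node being taken down
service mysql stop

# on any remaining node, the cluster size should have decreased by one
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"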


Configure MySQL Galera Cluster to listen on a specific IP address

If you have a separate private network for your MySQL Galera Cluster, it is a good security practice to configure it to listen only on the private IP address. This way you have fewer firewall settings to set up and rely on. The following has been tested with Percona XtraDB Cluster.

A MySQL Galera Cluster listens on two different ports at all times, in order to provide the following services:

  • port 4567 – Galera Cluster communication
  • port 3306 – MySQL client connections, and State Snapshot Transfer when the “mysqldump” method is used

While those two services could be bound to different IP addresses, they usually use the same one. Each of these services is configured using a different MySQL setting in “my.cnf”:

  • port 4567 – “wsrep_cluster_address=gcomm://%CLUSTER_IP1%,%CLUSTER_IP2%,%CLUSTER_IP3%?gmcast.listen_addr=tcp://%THIS_NODE_LISTEN_IP%:4567”
  • port 3306 – “bind-address=%THIS_NODE_LISTEN_IP%”

If the current node of our cluster has an IP address of 169.254.50.1, we would have the following networking configuration in “my.cnf”. Note that “gmcast.listen_addr” can be given either inside the “gcomm://” URL as shown above, or via “wsrep_provider_options” as shown below; the two forms are equivalent:

wsrep_provider_options="gmcast.listen_addr=tcp://169.254.50.1:4567"
wsrep_node_address=169.254.50.1
bind-address=169.254.50.1
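
After restarting MySQL with this configuration, a quick check that nothing listens on the catch-all “0.0.0.0” address any more could look like this (the exact “netstat” output format varies between distributions):

# both ports should show the private IP address 169.254.50.1, not 0.0.0.0
netstat -ltn | egrep ':(3306|4567) '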

There are two other ports which are opened on demand: port 4568 for Incremental State Transfer, and port 4444 for all other State Snapshot Transfer operations. Those two ports are controlled by “wsrep_sst_receive_address” and the “ist.recv_addr” option in “wsrep_provider_options”, as explained at this page. The default listening IP address is the same as configured for “wsrep_node_address”, and therefore doesn’t need any additional tweaks.
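
If you nevertheless want to specify them explicitly, the relevant “my.cnf” settings would look roughly like this sketch, which reuses the example IP address from above. Note that all Galera provider options go into the single, semicolon-separated “wsrep_provider_options” string:

wsrep_sst_receive_address=169.254.50.1
wsrep_provider_options="gmcast.listen_addr=tcp://169.254.50.1:4567; ist.recv_addr=169.254.50.1"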

EDIT: It turns out that regardless of what is specified in the above two options for ports 4444 and 4568, at least the “other” State Snapshot Transfer port 4444 always listens on the catch-all IP address “0.0.0.0”, which accepts connections on any network interface and local address. I’ve observed this while a node was in a “Donor” state because another node was just joining the cluster.
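
If this is a concern, you can compensate with a firewall rule which restricts the SST port to the private cluster network. A minimal iptables sketch, assuming that the private cluster network is 169.254.50.0/24:

# drop connections to the SST port 4444 unless they come from the private cluster network
iptables -A INPUT -p tcp --dport 4444 ! -s 169.254.50.0/24 -j DROP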