wagnerbianchi.com

Testing the New MySQL InnoDB Cluster

September 20th, 2016 Bianchi Posted in MySQL HA

After receiving the announcement made by Oracle via Lefred, I got very curious about the new MySQL InnoDB Cluster. After watching the video, I downloaded the package, read the online manual and started playing with it. My first impression was that it has the simplicity of the MongoDB Shell, but with more resilience, because it is a master-master cluster in which another node assumes the PRIMARY role when the existing one crashes. It's really good to have something this simple in the MySQL world because, IMHO, everything we have had until now takes some time to set up and get running. KISS is a very good idea, and I can see that MySQL InnoDB Cluster was created to be simple to set up; congrats for that, Oracle!

After Downloading Packages…

After getting the packages onto a Vagrant VM, it was just a matter of untarring them, and I saw that the bundle is made up of three other main packages:

[root@box01 ~]# wget http://downloads.mysql.com/snapshots/pb/mysql-innodb-cluster-5.7.15-preview/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
--2016-09-21 00:25:52-- http://downloads.mysql.com/snapshots/pb/mysql-innodb-cluster-5.7.15-preview/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
Resolving downloads.mysql.com (downloads.mysql.com)... 137.254.60.14
Connecting to downloads.mysql.com (downloads.mysql.com)|137.254.60.14|:80... connected.
HTTP request sent, awaiting response... 200 OK
[root@box01 ~]# ls -lh
total 1.1G
-rw-r--r-- 1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 root root 536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
-rw-r--r-- 1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 7155 31415 44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar

Yes, everything after tar zxvf adds up to 1.1G! That makes sense, as the bundle comprises all the MySQL 5.7 Server packages, the MySQL Router and the MySQL Shell.

[root@box01 ~]# ls -lhR
.:
total 1.1G
-rw-------. 1 root root 1.4K Jul 16 2015 anaconda-ks.cfg
-rw-r--r-- 1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 root root 536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
-rw-r--r-- 1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 7155 31415 44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar
drwxr-xr-x 2 root root 4.0K Sep 21 01:32 rpms
 
./rpms:
total 538M
-rw-r--r-- 1 7155 31415 24M Sep 15 11:01 mysql-community-client-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 272K Sep 15 11:01 mysql-community-common-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 3.6M Sep 15 11:01 mysql-community-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 44M Sep 15 11:01 mysql-community-embedded-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 23M Sep 15 11:01 mysql-community-embedded-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 120M Sep 15 11:01 mysql-community-embedded-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 2.2M Sep 15 11:02 mysql-community-libs-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 2.1M Sep 15 11:02 mysql-community-libs-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 161M Sep 15 11:02 mysql-community-server-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 112M Sep 15 11:02 mysql-community-test-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 1.2M Sep 16 09:43 mysql-router-2.1.0-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 3.3M Sep 16 09:43 mysql-router-debuginfo-2.1.0-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 4.2M Sep 16 09:43 mysql-shell-1.0.5-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 40M Sep 16 09:43 mysql-shell-debuginfo-1.0.5-0.1.labs.el7.x86_64.rpm

So, let's get this installed. I recommend using yum so it resolves the dependencies:

[root@box01 ~]# yum -y install *.rpm
[...snip...]
Installed:
mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-embedded.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-embedded-compat.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-embedded-devel.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-community-test.x86_64 0:5.7.15-1.labs_gr090.el7
mysql-router.x86_64 0:2.1.0-0.1.labs.el7
mysql-router-debuginfo.x86_64 0:2.1.0-0.1.labs.el7
mysql-shell.x86_64 0:1.0.5-0.1.labs.el7
mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7
 
Dependency Installed:
perl-Data-Dumper.x86_64 0:2.145-3.el7
 
Replaced:
mariadb-libs.x86_64 1:5.5.41-2.el7_0

Now it's time to start a MySQL InnoDB Cluster! From now on, make sure you're using a user other than root!

First step: start MySQL 5.7 and change the root password, as we would for a normal MySQL instance:

[wb@box01 rpms]# systemctl start mysqld.service
[wb@box01 rpms]# cat /var/log/mysqld.log | grep temp
2016-09-20T23:45:06.950465Z 1 [Note] A temporary password is generated for root@localhost: agaUf8YrhQ!R
2016-09-20T23:45:10.198806Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
[wb@box01 rpms]# mysql -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.15-labs-gr090
 
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> alter user root@localhost identified by 'P@ssw0rd';
Query OK, 0 rows affected (0.00 sec)
 
mysql> \q
Bye

At this point, a note: if you previously tried to create an instance, for example on port 3310, and the attempt failed, the directory /root/mysql-sandboxes/3310 will not be empty and an error like the one below will be raised if you try again. Make sure that directory is clean before creating the instance again:

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
ERROR: Error executing the 'sandbox create' command: The sandbox dir '/root/mysql-sandboxes/3310' is not empty.
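If that is your case, a minimal cleanup sketch, assuming the sandbox path reported in the error above (double-check the path before removing anything, and stop any mysqld still running from that sandbox):

[root@box01 ~]# rm -rf /root/mysql-sandboxes/3310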

So, with the root password P@ssw0rd set for MySQL 5.7 and the server running, let's deploy the instances that will soon be added to our InnoDB Cluster. Below I deployed five instances:

mysql-js> dba.deployLocalInstance(3310)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3310
 
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
 
Instance localhost:3310 successfully deployed and started.
Use '\connect root@localhost:3310' to connect to the instance.
 
mysql-js> dba.deployLocalInstance(3311)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3311
 
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
 
Instance localhost:3311 successfully deployed and started.
Use '\connect root@localhost:3311' to connect to the instance.
 
mysql-js> dba.deployLocalInstance(3312)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3312
 
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
 
Instance localhost:3312 successfully deployed and started.
Use '\connect root@localhost:3312' to connect to the instance.
 
mysql-js> dba.deployLocalInstance(3313)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3313
 
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
 
Instance localhost:3313 successfully deployed and started.
Use '\connect root@localhost:3313' to connect to the instance.
 
mysql-js> dba.deployLocalInstance(3314)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3314
 
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...
 
Instance localhost:3314 successfully deployed and started.
Use '\connect root@localhost:3314' to connect to the instance.

As the manual says, the next step is to initialize the cluster after connecting to one of the instances we created previously; we can choose any of the instances as the point from which to initialize the cluster:

mysql-js> \connect root@localhost:3310
Creating a Session to 'root@localhost:3310'
Enter password:
Classic Session successfully established. No default schema selected.
mysql-js> cluster = dba.createCluster('wbCluster001')
A new InnoDB cluster will be created on instance 'root@localhost:3310'.
 
When setting up a new InnoDB cluster it is required to define an administrative
MASTER key for the cluster. This MASTER key needs to be re-entered when making
changes to the cluster later on, e.g.adding new MySQL instances or configuring
MySQL Routers. Losing this MASTER key will require the configuration of all
InnoDB cluster entities to be changed.
 
Please specify an administrative MASTER key for the cluster 'wbCluster001':
Creating InnoDB cluster 'wbCluster001' on 'root@localhost:3310'...
Adding Seed Instance...
 
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
 
 
mysql-js>

A MASTER key is required to create the cluster. Make sure the value you provide as the MASTER key is well protected and that you don't lose it, as it is an important piece of InnoDB Cluster management.

So, our MySQL InnoDB Cluster is created, Voilà!

The next step is to add the remaining instances, now replicas, to the existing MySQL InnoDB Cluster, wbCluster001.

mysql-js> cluster.addInstance('root@localhost:3311')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@localhost:3311':
Adding instance to the cluster ...
 
The instance 'root@localhost:3311' was successfully added to the cluster.
 
mysql-js> cluster.addInstance('root@localhost:3312')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@localhost:3312':
Adding instance to the cluster ...
 
The instance 'root@localhost:3312' was successfully added to the cluster.
 
mysql-js> cluster.addInstance('root@localhost:3313')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@localhost:3313':
Adding instance to the cluster ...
 
The instance 'root@localhost:3313' was successfully added to the cluster.
 
mysql-js> cluster.addInstance('root@localhost:3314')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@localhost:3314':
Adding instance to the cluster ...
 
The instance 'root@localhost:3314' was successfully added to the cluster.

Finally, we can check the whole cluster:

mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 3 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3311": {
                        "address": "localhost:3311",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3312": {
                        "address": "localhost:3312",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3313": {
                        "address": "localhost:3313",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3314": {
                        "address": "localhost:3314",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}

Beautiful!! All nodes report the status ONLINE; a node could also report OFFLINE, or RECOVERING while it is receiving updates and catching up with the cluster's state, as happens when we add a new node to an existing cluster. Additionally, only the bootstrapped node is in R/W mode at this point and the others are in R/O. That means the solution was designed to support writes on one node, which is considered the PRIMARY, while the others are considered SECONDARIES. When the current primary goes down, one of the secondaries assumes the role.
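Under the hood this is Group Replication running in single-primary mode, so we can also ask the server itself which member currently holds the primary role. A minimal sketch, assuming we point a plain mysql client at one of the sandbox instances (the port is just an example):

[wb@box01 ~]$ mysql -uroot -p -h127.0.0.1 -P3310 -e "SELECT VARIABLE_VALUE AS primary_member FROM performance_schema.global_status WHERE VARIABLE_NAME='group_replication_primary_member';"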

At this point we can check other things regarding the MySQL InnoDB Cluster.

#: local instances metadata
[wb@box01 ~]$ ls -lh ~/mysql-sandboxes/
total 24K
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:00 3310
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3311
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3312
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3313
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:03 3314
drwxrwxr-x 4 wb wb 4,0K 22. Sep 01:03 3315
 
#: sockets open
[wb@box01 ~]$ netstat -na | grep sand
unix  2      [ ACC ]     STREAM     LISTENING    25608    /home/wb/mysql-sandboxes/3315/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    25613    /home/wb/mysql-sandboxes/3315/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING    25386    /home/wb/mysql-sandboxes/3313/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    25391    /home/wb/mysql-sandboxes/3313/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING    25275    /home/wb/mysql-sandboxes/3312/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    25280    /home/wb/mysql-sandboxes/3312/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING    24903    /home/wb/mysql-sandboxes/3310/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    24908    /home/wb/mysql-sandboxes/3310/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING    25166    /home/wb/mysql-sandboxes/3311/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    25171    /home/wb/mysql-sandboxes/3311/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING    25497    /home/wb/mysql-sandboxes/3314/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING    25502    /home/wb/mysql-sandboxes/3314/mysqld.sock

If you disconnect from mysqlsh and later want to reconnect to the cluster you created, you need to connect to the instance you used as the seed, use dba.getCluster() to assign the cluster you want to check to a variable, and then run cluster.status() again, as below:

mysql-js> \connect root@localhost:3310
Creating a Session to 'root@localhost:3310'
Enter password:
Classic Session successfully established. No default schema selected.
mysql-js> cluster = dba.getCluster()
When the InnoDB cluster was setup, a MASTER key was defined in order to enable
performing administrative tasks on the cluster.
 
Please specify the administrative MASTER key for the default cluster:
<Cluster:wbCluster001>

And then cluster.status():

mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 4 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3311": {
                        "address": "localhost:3311",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3312": {
                        "address": "localhost:3312",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3313": {
                        "address": "localhost:3313",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3314": {
                        "address": "localhost:3314",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3315": {
                        "address": "localhost:3315",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}
mysql-js> \q
Bye!

More resources:

  • Docs: https://dev.mysql.com/doc/mysql-innodb-cluster/en/


How to MySQL 5.6 MTS? Some tips and tests…

October 6th, 2014 Bianchi Posted in MySQL HA, MySQL Replication

In terms of new features, MTS conceptually exceeds all expectations, since it is a feature that should alleviate (or at least try to) all the stress we have been seeing for years with a slave server falling behind the master in some way, whether measured in files or in seconds. The first thing that comes to my mind is the red mark on my personal server's dashboard that has always called my attention whenever a slave hangs while the master keeps working fine. Many times I have seen I/O problems that make a slave hang during relay log rotation or while executing a long report; at least four times a day I had to go and check what was going on. Because of all that, I believe exchanging the single-thread model for one with multiple threads will alleviate the problem, and I hope slave servers will not hang so much anymore.

Having said that, I have seen cases in which, after implementing MySQL 5.6 in production, properly adjusting the worker threads variable (slave_parallel_workers) and starting the slave, it does not always work as promised. The first action when it is not working properly is to check all the configuration needed to get it working well; it is not just a matter of raising the variable mentioned above and putting it to run. You must make sure of some small details (a minimal my.cnf sketch follows the list):

  • make sure the master and slaves are MySQL 5.6 or newer; there is information shipped with the binary log events which will be read by the slave and executed by more than one thread (slave_parallel_workers > 1);
  • make sure slave_parallel_workers is configured with a value > 1;
  • make sure to enable crash-safe replication to make it more reliable, adding master_info_repository = TABLE, relay_log_info_repository = TABLE and relay_log_recovery = ON to the my.cnf file;
  • it works with binlog_format set to either STATEMENT or ROW.
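A minimal slave-side my.cnf sketch covering the points above; the values are illustrative only and must be adapted to your environment:

[mysqld]
server_id                 = 200
slave_parallel_workers    = 4       # > 1 enables MTS on the slave
master_info_repository    = TABLE   # crash-safe replication
relay_log_info_repository = TABLE
relay_log_recovery        = ON
relay_log                 = mysqld-relay-bin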

There is no problem in using the binlog format as STATEMENT or ROW; both formats work well, since with both the binary log files carry all the information the workers need. Another piece of advice: once you have started MySQL replicating in crash-safe mode, it is not possible to change the relay and master info repositories dynamically at runtime on a busy server, because the worker info is stored in tables, and if you change the location of this information the workers might lose their reference.

First of all, I did a test based on a customer environment where they are running MySQL 5.5.37 on all the master and slave servers. The strategy is to replace the slave servers one by one until we hit the master, finally doing a switchover to another server in order to upgrade the master to MySQL 5.6: a normal and reasonable strategy to avoid risks in production. If you get an error at any step of the project, take the time to study a little more what is happening and only then move on to the next task. Another point to take into account is old_passwords (removed in 5.7), which is still available in 5.6 and must be used while users in the mysql.user table still have 16-byte password hashes. While updating all the account passwords, it is a good maneuver to keep old_passwords=1 to give yourself some time to map all the accounts used by the applications and avoid access-denied problems (while dealing with the password updates, configure log_warnings=2 to log all failed login attempts and correct them).
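A quick, hedged way to spot the accounts still using the old 16-byte hashes, assuming you can query the mysql schema on the master:

mysql> SELECT User, Host, LENGTH(Password) AS hash_len
    ->   FROM mysql.user
    ->  WHERE LENGTH(Password) = 16;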

Test scenario: confirming that MySQL 5.5 (master) and MySQL 5.6 (slave) do not replicate with MTS

As this is the scenario I have seen the most at many of the customers I have visited, I decided to spend some time stressing the possibilities around MTS replication with a 5.5 master and a newcomer, MySQL 5.6, as the slave. Some discussions on the internet made me believe that, at some level of configuration, this scenario would become possible, but it is not supported. At MySQL Central we discussed many scenarios, and some of the high-level guys known as developers explained that 5.6 ships additional information in the binary logs; with it, the slave knows how to split queries (binlog_format=statement) or row updates (binlog_format=row) among the workers (threads). This job is actually done by a coordinator, which is also a thread and executes what is read from the relay logs on the slave side. Anyway, all of this I got to know after testing the environment, which I brought up using Vagrant as shown below.

Files you'll need to create/copy/paste. Make sure you have a centos65-x86_64 box added to your Vagrant boxes, or change the values of mysql55.vm.box and mysql56.vm.box in the Vagrantfile.

wagnerbianchi02:mysql55_and_mysql56 root# ls -lh
total 24
drwxr-xr-x 3 root wheel 102B Oct 6 12:53 .vagrant
-rw-r--r-- 1 root wheel 760B Oct 6 12:53 Vagrantfile
-rw-r--r-- 1 root wheel 681B Oct 6 12:52 setup_env55.sh
-rw-r--r-- 1 root wheel 343B Oct 6 12:42 setup_env56.sh

Vagrantfile, which you can just copy/paste:

# -*- mode: ruby -*-
# vi: set ft=ruby :

file_to_disk = './tmp/galera_disk.vdi'

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define "mysql55" do |mysql55|
    mysql55.vm.box = "centos65-x86_64"
    mysql55.vm.network :"private_network", virtualbox__intnet: "mysql55_and_mysql56", ip: "192.168.50.100"
    mysql55.vm.provision "shell", path: "setup_env55.sh"
  end

  config.vm.define "mysql56" do |mysql56|
    mysql56.vm.box = "centos65-x86_64"
    mysql56.vm.network :"private_network", virtualbox__intnet: "mysql55_and_mysql56", ip: "192.168.50.101"
    mysql56.vm.provision "shell", path: "setup_env56.sh"
  end

end

If you get an error like this below, review the boxes you’ve got added to your Vagrant boxes:

wagnerbianchi01:mysql55_and_mysql56 root# vagrant up
Bringing machine 'mysql55' up with 'virtualbox' provider...
Bringing machine 'mysql56' up with 'virtualbox' provider...
==> mysql55: Box 'centos65-x86_64' could not be found. Attempting to find and install...
mysql55: Box Provider: virtualbox
mysql55: Box Version: >= 0
==> mysql55: Adding box 'centos65-x86_64' (v0) for provider: virtualbox
mysql55: Downloading: centos65-x86_64
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Couldn't open file /opt/vagrant_projects/mysql55_and_mysql56/centos65-x86_64

Setup scripts that you can use to create the other machines:

#!/usr/bin/env bash
#: script name: setup_env55.sh
#
sudo echo "nameserver 8.8.8.8" > /etc/resolv.conf
sudo echo "nameserver 8.8.4.4" >> /etc/resolv.conf
sudo yum -y install wget vim
sudo yum -y remove mysql-libs-5.1.71-1.el6.x86_64
sudo rpm -Uvi https://dev.mysql.com/get/mysql-community-release-el6-5.noarch.rpm
sudo wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-5.5.39-2.el6.x86_64.rpm-bundle.tar
sudo tar xvf MySQL-5.5.39-2.el6.x86_64.rpm-bundle.tar
sudo rpm -ivh MySQL-{server,shared,client}*
sudo /etc/init.d/mysql start

#!/usr/bin/env bash
#: script name: setup_env56.sh
#
sudo echo "nameserver 8.8.8.8" > /etc/resolv.conf
sudo echo "nameserver 8.8.4.4" >> /etc/resolv.conf
sudo yum -y install wget vim
sudo rpm -Uvi https://dev.mysql.com/get/mysql-community-release-el6-5.noarch.rpm
sudo yum-config-manager --enable mysql56-community
sudo yum -y install mysql-server
sudo service mysqld start

With that in place, it is just a matter of running vagrant up and the machines are up & running.

wagnerbianchi01:mysql55_and_mysql56 root# vagrant status
Current machine states:

mysql55 running (virtualbox)
mysql56 running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

After setting up additional things like classic or GTID replication (which one can do using MySQL Utilities), it is possible to execute the tests. In addition to the regular, initial variables that come mainly with the 5.5 configuration file, on the 5.6 side I added server_id=200, slave_parallel_workers=2, and master_info_repository = TABLE, relay_log_info_repository = TABLE and relay_log_recovery = ON for the crash-safe replication configuration, as 5.6 will be the slave.
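For reference, a sketch of the classic replication setup used on the 5.6 slave, assuming a replication user 'repl' already exists on the 5.5 master and that the binary log coordinates were taken from SHOW MASTER STATUS (the values below are illustrative):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='192.168.50.100',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='repl_password',
    ->   MASTER_LOG_FILE='master-bin.000001',
    ->   MASTER_LOG_POS=4;
mysql> START SLAVE;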

mysql> select @@server_id, @@slave_parallel_workers, @@master_info_repository,
    -> @@relay_log_info_repository, @@master_info_repository, @@relay_log_recovery\G
*************************** 1. row ***************************
                @@server_id: 200
   @@slave_parallel_workers: 2
   @@master_info_repository: TABLE
@@relay_log_info_repository: TABLE
   @@master_info_repository: TABLE
       @@relay_log_recovery: 1
1 row in set (0.00 sec)

Checking the replication status:

           Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Now, the test proposed on Luis' blog some time ago to explain MTS is, on the master server:

mysql> create database db1; create database db2;
Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

On the slave side, check the content of the table mysql.slave_worker_info – this table registers all the activity of the MTS replication workers. Below you can see that, as we haven't executed anything directly against the databases yet, the threads haven't done any work yet.

mysql> select * from mysql.slave_worker_info\G
*************************** 1. row ***************************
                        Id: 1
            Relay_log_name:
             Relay_log_pos: 0
           Master_log_name:
            Master_log_pos: 0
 Checkpoint_relay_log_name:
  Checkpoint_relay_log_pos: 0
Checkpoint_master_log_name:
 Checkpoint_master_log_pos: 0
          Checkpoint_seqno: 0
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
*************************** 2. row ***************************
                        Id: 2
            Relay_log_name:
             Relay_log_pos: 0
           Master_log_name:
            Master_log_pos: 0
 Checkpoint_relay_log_name:
  Checkpoint_relay_log_pos: 0
Checkpoint_master_log_name:
 Checkpoint_master_log_pos: 0
          Checkpoint_seqno: 0
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
2 rows in set (0.00 sec)

Coming back to the master, create the tables and enter some inserts:

mysql> create table db1.t1(a int); create table db2.t1(a int);
Query OK, 0 rows affected (0.01 sec)

Query OK, 0 rows affected (0.00 sec)

mysql> insert into db1.t1 values (1),(2),(3); insert into db2.t1 values (1),(2),(3);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

And then we check mysql.slave_worker_info again to see whether those two listed threads have worked or not:

mysql> select * from mysql.slave_worker_info\G
*************************** 1. row ***************************
                        Id: 1
            Relay_log_name: ./mysqld-relay-bin.000002
             Relay_log_pos: 1171
           Master_log_name: master-bin.000001
            Master_log_pos: 1007
 Checkpoint_relay_log_name: ./mysqld-relay-bin.000002
  Checkpoint_relay_log_pos: 797
Checkpoint_master_log_name: master-bin.000001
 Checkpoint_master_log_pos: 633
          Checkpoint_seqno: 1
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
*************************** 2. row ***************************
                        Id: 2
            Relay_log_name:
             Relay_log_pos: 0
           Master_log_name:
            Master_log_pos: 0
 Checkpoint_relay_log_name:
  Checkpoint_relay_log_pos: 0
Checkpoint_master_log_name:
 Checkpoint_master_log_pos: 0
          Checkpoint_seqno: 0
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
2 rows in set (0.00 sec)

Just one thread working!! Yes, we confirmed that this does not work when you have the master on 5.5 and slaves on 5.6, regardless of the binlog_format and so on. A good point for discussion here is that MySQL 5.6 has received lots of improvements to its engine, many related to InnoDB and many others elsewhere. Maybe it is a good time to start upgrading the slaves from 5.5 to 5.6 until we hit the master and then upgrade all the database machines, even keeping MySQL 5.6 MTS disabled for the moment.

Test scenario: confirming that MySQL 5.6 (master) and MySQL 5.6 (slave) replicate with MTS, even with binlog_format=statement

To run this new test, we just need to remove the 5.5 RPM packages from the master, install 5.6 from the repository and start the slave. The final step is to execute the tests again and check the contents of the mysql.slave_worker_info table on the slave server.

[root@mysql55 ~]# service mysql stop
Shutting down MySQL. SUCCESS!
[root@mysql55 ~]# rpm -e MySQL-shared-5.5.39-2.el6.x86_64 MySQL-client-5.5.39-2.el6.x86_64 \
> MySQL-shared-compat-5.5.39-2.el6.x86_64 MySQL-server-5.5.39-2.el6.x86_64 \
> MySQL-shared-compat-5.5.39-2.el6.x86_64
[root@mysql55 ~]# yum -y install mysql-server
[...]
Setting up Install Process
[...]
Installed size: 329 M
Downloading Packages:
[...]
Complete!

With 5.6 on the master side, the next step is to add thread_stack=256K to my.cnf to avoid the reported misconfiguration warning. After that, it is time to put those two configured worker threads to work…

On master:

[root@mysql55 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]
[root@mysql55 ~]# mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.21-log MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> insert into db1.t1 values (1),(2),(3); insert into db2.t1 values (1),(2),(3);
Query OK, 3 rows affected (0.01 sec)
Records: 3  Duplicates: 0  Warnings: 0

Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0

Checking the worker threads on slave:

mysql> select * from mysql.slave_worker_info\G
*************************** 1. row ***************************
                        Id: 1
            Relay_log_name: ./mysqld-relay-bin.000004
             Relay_log_pos: 976
           Master_log_name: master-bin.000002
            Master_log_pos: 816
 Checkpoint_relay_log_name: ./mysqld-relay-bin.000004
  Checkpoint_relay_log_pos: 554
Checkpoint_master_log_name: master-bin.000002
 Checkpoint_master_log_pos: 394
          Checkpoint_seqno: 1
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
*************************** 2. row ***************************
                        Id: 2
            Relay_log_name: ./mysqld-relay-bin.000004
             Relay_log_pos: 765
           Master_log_name: master-bin.000002
            Master_log_pos: 605
 Checkpoint_relay_log_name: ./mysqld-relay-bin.000004
  Checkpoint_relay_log_pos: 554
Checkpoint_master_log_name: master-bin.000002
 Checkpoint_master_log_pos: 394
          Checkpoint_seqno: 0
     Checkpoint_group_size: 64
   Checkpoint_group_bitmap:
2 rows in set (0.00 sec)

Yes, it is working, and it is confirmed that MTS is a feature present only on 5.6, using ROW or STATEMENT as binlog_format. BTW, I like to blog about things considering all the small details because, as our MySQL friend said at MySQL Central @ OOW14, "do not underestimate the importance of the small things".

All the best,


Starting with MySQL Cluster

November 8th, 2011 Bianchi Posted in MySQL HA

MySQL Cluster originates from a product called Network DataBase from the telco Ericsson, located in Sweden. Ericsson's intention was to have a database running on the network as a service. Today's MySQL Cluster keeps the NDB name because of that original technology: for example, the name of the storage engine connected to the MySQL Server is NDB (instead of InnoDB or MyISAM). In general, whenever you see NDBCLUSTER, you can think of it as "MySQL Cluster".

Usually we have many parts involved in a cluster technology, and with MySQL Cluster it is no different. MySQL Cluster implements three kinds of node to achieve the objective of having no SPOF (Single Point Of Failure) and to eliminate downtime, keeping data available longer. Besides that, an automatic failover process takes place whenever one of the nodes crashes in the middle of operations. The node types are:

  • Management Node: this node's function is to manage the environment. The client program called ndb_mgm connects to its daemon, ndb_mgmd, and can be used to retrieve information about the other connected cluster nodes and to execute services such as a cluster backup, for example;
  • Storage Node (or Data Node): connected to the management node, these nodes are the cluster's storage and hold the databases' data. I strongly recommend starting a new MySQL Cluster with at least two data nodes (four is better);
  • API Node (or SQL Node): this node is responsible for receiving all external interaction (SQL commands) and manipulating data on the storage nodes.

Now that we understand what each cluster node is, we need to know how to start the configuration of a simple MySQL Cluster, which is the main topic of this article. Because MySQL Cluster has a shared-nothing architecture, i.e., each component has its own hardware and structure, we must start this "simple" project using some virtual machine software (I am using Oracle VirtualBox) to create at least five machines, each with the node's name as its hostname, firewalls disabled, SELinux disabled and a static network IP. I am using CentOS 5.5 as the operating system and MySQL Cluster 7.1.9.
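A small host-preparation sketch for each VM, assuming CentOS 5.x commands (adjust to your distribution), covering the firewall and SELinux points mentioned above:

[ root@node1 ~ ] service iptables stop
[ root@node1 ~ ] chkconfig iptables off
[ root@node1 ~ ] setenforce 0   # and set SELINUX=disabled in /etc/selinux/config to persist across reboots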

The first step is to configure node1, which I set up as the Management Node, the node that will serve to retrieve information about all the other cluster nodes and to execute services such as backups and starting and restarting nodes. During the operating system installation on this machine, you must ensure that the firewall and SELinux are disabled (MySQL Cluster has problems with firewalls because it needs a few open ports). Configure a static IP on the OS and download MySQL-cluster-gpl-management-xxxxx-rhel-x86-64.rpm and MySQL-cluster-gpl-tools-xxxxx-rhel-x86-64.rpm. After that, log in at the Linux terminal as root, create a new directory named mysql-cluster under /usr/local and move the files from the Downloads directory (I am using Firefox with its default configuration) to /usr/local/mysql-cluster. We also need to create the DataDir where the ndb_mgmd log files will be located.

[ root@node1 ~ ] mkdir -p /usr/local/mysql-cluster
[ root@node1 ~ ] mkdir -p /var/lib/mysql-cluster
[ root@node1 ~ ] mv /home/bianchi/Desktop/Downloads/MySQL-* /usr/local/mysql-cluster
[ root@node1 ~ ] cd /usr/local/mysql-cluster

After downloading the right MySQL Cluster files for the Management Node, we need to create the MySQL Cluster configuration file (using your preferred text editor). This file will be located at /usr/local/mysql-cluster/ and its name will be config.ini.

[ root@node1 mysql-cluster ] pwd
/usr/local/mysql-cluster
[ root@node1 mysql-cluster ] vim config.ini

# In this file we'll put all the configuration required by all nodes. Watch for the comments I use in this file (comments come after the # sign).

[ndb_mgmd]
#
# Configurations used to control ndb_mgmd behavior
#
NodeId=1
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster

[ndbd default]
#
# Configurations that will be inherited by all storage/data nodes
#
DataDir=/var/lib/mysql-cluster
NoOfReplicas=2

[ndbd]
# registering new storage node
NodeId=3
HostName=192.168.0.102

[ndbd]
# registering new storage node
NodeId=4
HostName=192.168.0.103

[mysqld]
# registering new API/SQL node
NodeId=11
HostName=192.168.0.104

 

[mysqld]
# registering new API/SQL node
NodeId=12
HostName=192.168.0.105

Save and close the config.ini file. This file contains the configuration to start a MySQL Cluster with 1 management node, 2 storage nodes and 2 SQL nodes. Now we will proceed with the management node software installation. As we are working with rpm packages, we only need one command to install all the MySQL packages located at /usr/local/mysql-cluster.

[ root@node1 mysql-cluster ] rpm -ivh MySQL-*
Preparing... #################################### [100%]
1:MySQL-Cluster-gpl-management ################## [100%]
2:MySQL-Cluster-gpl-tools ####################### [100%]
[ root@node1 mysql-cluster ]

After the install, start ndb_mgmd pointing it at the previously created config.ini file:

[ root@node1 mysql-cluster ] pwd
/usr/local/mysql-cluster
[ root@node1 mysql-cluster ] ndb_mgmd --config-file=config.ini
MySQL Cluster Management Server mysql-5.1.51 ndb-7.1.9
[ root@node1 mysql-cluster ]

We can use the $? shell variable to check whether any error was raised when we started ndb_mgmd:

[ root@node1 mysql-cluster ] echo $?
0
[ root@node1 mysql-cluster ]

The $? shell variable can return the values below:

0 -> no errors during the last command execution, i.e., success
1 -> an unknown error occurred
2 -> a command error was detected inside the script
127 -> a nonexistent command was entered and raised an error

As you can see, our Management Node is up and running, and we can issue some commands to list all the cluster's node members. Just type ndb_mgm to use the client to connect to ndb_mgmd and retrieve the information, as you see below:

[ root@node1 ~ ] # ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 (not connected, accepting connect from 192.168.0.101)
id=4 (not connected, accepting connect from 192.168.0.102)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.100 (mysql-5.1.51 ndb-7.1.9)

 

[mysqld(API)] 4 node(s)
id=11 (not connected, accepting connect from 192.168.0.103)
id=12 (not connected, accepting connect from 192.168.0.104)

Naturally, as you can see above, we don't have any nodes connected except the management node. We have now completed the first of the five parts of the job. Next I'll describe how to configure the Storage/Data Nodes and connect them to the Management Node.

See you.


MySQL Cluster Storage Nodes

November 8th, 2011 Bianchi Posted in MySQL HA

Continuing with our MySQL Cluster studies, I am starting this new post to write about the Storage or Data Nodes, the ones responsible for storing all the cluster data. When we start a cluster (as you read in MySQL Cluster First Topics), the first step is to start the Management Node and the second is to start the storage nodes, which store data using the NDBCLUSTER storage engine. Normally, the correct and desired way to support a storage node is to dedicate a separate machine so that only the ndbd process uses that machine's resources. This is important because, if the data node, or its daemon ndbd, does not have enough memory to keep at least the indexes in memory, it will crash and will not function properly. But we will treat these little things further ahead; for now, the goal is to introduce the first steps to set up a data node, start it and be happy with your cluster!
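Once the data nodes are running, a quick way to keep an eye on that memory usage is the management client's memory report; a sketch, run from the Management Node:

[ root@node1 ~ ] # ndb_mgm -e "ALL REPORT MemoryUsage"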

After choosing which machines or boxes will be the cluster storage nodes, paying attention to using the same CPU model, the same amount of memory and, most importantly, a fully 64-bit setup (OS, hardware and software), we can download the MySQL Cluster Storage Node software components in order to install the correct packages and configure them to connect to the Management Node. Remember, all the hardware involved should have the same configuration in order to avoid performance problems, and keep the cluster as simple as you can (normally I have been implementing MySQL Cluster using virtual machines in order to get hardware configurations as close as possible; the trade-off is that we must keep an eye on SPOFs, or single points of failure). To build the Storage Nodes, you will need to download two packages:

  • MySQL-Cluster-gpl-tools-7.x.x-x
  • MySQL-Cluster-gpl-storage-7.x.x-x

As I am using CentOS 5.5 to write this post, I have downloaded the .rpm packages, which will be installed using the rpm package manager at the Linux terminal. You can follow this post on MS Windows, for example, and install the equivalent packages as you wish. Below I demonstrate the install process:

[ root@node3 ]# rpm -ivh MySQL-*
Preparing... ############################################ [100%]
1:MySQL-Cluster-gpl-tools ################################## [100%]
2:MySQL-Cluster-gpl-stora ################################## [100%]
[ root@node1 ]#

After this, we can look at the cluster's file concepts: local files and global files. Local files are the files created locally on the node's machine and serve to configure the Storage Node's connectstring to the cluster (the ndb_connectstring variable, or its shortcut -c, can be used in local files or passed on the command line). A nice touch is that, when you are using local files, you only need to provide the few settings that apply to the Storage Node's connection to the Management Node. As we have the necessary components installed at this moment, we must create a configuration file that will be read when ndbd starts (you can check the default locations where ndbd looks for local files by running ndbd --help and reading the first lines). The local file is created below in the example:

[ root@node3 ]# vim /etc/my.cnf

 

# my.cnf local file
# storage node's configuration
#
[ndbd]
ndb-connectstring=192.168.1.100:1186
# finish

Before I forget, the cluster's global configuration file is the one we created in the first post, the one where the majority of the configuration is defined and which resides on the Management Node. There, we can place the configuration that will be applied to all Storage Nodes using the [ndbd default] section.
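For illustration only, this is how such an [ndbd default] section might look in the global config.ini on the Management Node; the memory values are examples, not recommendations:

[ndbd default]
NoOfReplicas=2
DataDir=/var/lib/mysql-cluster
DataMemory=512M
IndexMemory=128M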

Now you can simply call ndbd on the command line and it will read the local cluster configuration file in order to know the exact location of the Management Node (ndb_mgmd), then go through the normal startup phases before appearing as started in the ndb_mgm management client. Remember that the place to check whether all nodes are running is the ndb_mgm client on the Management Node (if you are using two Management Nodes, which is good, what a wonderful thing!, you can use either one to retrieve the cluster status information).

Calling ndbd on the command line:

[ root@node3 ]# ndbd
2011-03-25 13:21:13 [ndbd] INFO -- Angel connected to '192.168.1.100:1186'
2011-03-25 13:21:13 [ndbd] INFO -- Angel allocated nodeid: 3

As you can see, after starting the ndbd process, two processes were started together: one is the ndbd Storage Node and the other is the ndbd angel, the process that will bring the main Storage Node process back up in case it goes down. The started Storage Node received its NodeId as previously configured and is now waiting for the other nodes to finish their complete start! All Storage Nodes involved in the MySQL Cluster must go through the same process: installation of the components, creation of the configuration file mentioning the ndb_connectstring under the [ndbd] section, and starting ndbd. After completing these steps on all the cluster storage nodes, go to the Management Node and query the node status using the ndb_mgm client, as shown below:

[ root@node1 ~ ] # ndb_mgm -e "SHOW"
-- NDB Cluster -- Management Client --
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.1.103 (mysql-5.1.51 ndb-7.1.10, NodeGroup: 0, Master)
id=4 @192.168.1.104 (mysql-5.1.51 ndb-7.1.10, NodeGroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.100 (mysql-5.1.51 ndb-7.1.9)

[mysqld(API)] 4 node(s)
id=11 (not connected, accepting connect from 192.168.0.103)
id=12 (not connected, accepting connect from 192.168.0.104)

See ya.
