After seeing Oracle's announcement, relayed by Lefred, I got very curious about the new MySQL InnoDB Cluster. After watching the video, I downloaded the package, pulled up the online manual and started playing with it. My first impression is that it has the simplicity of the MongoDB shell, but with more resilience, since it is a replication cluster in which one of the remaining nodes assumes the PRIMARY role should the current one crash. It's really good to have something this simple in the MySQL world because, IMHO, everything we have had until now takes some time to set up and get running. KISS is a very good idea, and I can see that MySQL InnoDB Cluster was created to be simple to set up. Congrats to Oracle for that!
After Downloading Packages…
After downloading the tarball to a Vagrant VM, I just untarred it and saw that it bundles three main packages:
[root@box01 ~]# wget http://downloads.mysql.com/snapshots/pb/mysql-innodb-cluster-5.7.15-preview/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
--2016-09-21 00:25:52--  http://downloads.mysql.com/snapshots/pb/mysql-innodb-cluster-5.7.15-preview/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
Resolving downloads.mysql.com (downloads.mysql.com)... 137.254.60.14
Connecting to downloads.mysql.com (downloads.mysql.com)|137.254.60.14|:80... connected.
HTTP request sent, awaiting response... 200 OK
[root@box01 ~]# ls -lh
total 1.1G
-rw-r--r-- 1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 root root  536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
-rw-r--r-- 1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar
-rw-r--r-- 1 7155 31415  44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar
Yeah, after tar zxvf all the packages add up to 1.1G! That makes sense, as the bundle comprises all the MySQL Server 5.7 packages plus MySQL Router and MySQL Shell.
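For reference, the extraction goes roughly like this; a sketch, assuming you want the inner RPM bundles unpacked into an rpms/ directory as in the listing below:

#: unpack the outer tarball, then the three inner bundles into rpms/
[root@box01 ~]# tar zxvf mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
[root@box01 ~]# mkdir rpms
[root@box01 ~]# for b in mysql-*-bundle.tar; do tar xvf $b -C rpms/; done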
[root@box01 ~]# ls -lhR
.:
total 1.1G
-rw-------. 1 root root 1.4K Jul 16  2015 anaconda-ks.cfg
-rw-r--r--  1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar
-rw-r--r--  1 root root  536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz
-rw-r--r--  1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar
-rw-r--r--  1 7155 31415  44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar
drwxr-xr-x  2 root root 4.0K Sep 21 01:32 rpms

./rpms:
total 538M
-rw-r--r-- 1 7155 31415  24M Sep 15 11:01 mysql-community-client-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 272K Sep 15 11:01 mysql-community-common-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 3.6M Sep 15 11:01 mysql-community-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415  44M Sep 15 11:01 mysql-community-embedded-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415  23M Sep 15 11:01 mysql-community-embedded-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 120M Sep 15 11:01 mysql-community-embedded-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 2.2M Sep 15 11:02 mysql-community-libs-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 2.1M Sep 15 11:02 mysql-community-libs-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 161M Sep 15 11:02 mysql-community-server-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 112M Sep 15 11:02 mysql-community-test-5.7.15-1.labs_gr090.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 1.2M Sep 16 09:43 mysql-router-2.1.0-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 3.3M Sep 16 09:43 mysql-router-debuginfo-2.1.0-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415 4.2M Sep 16 09:43 mysql-shell-1.0.5-0.1.labs.el7.x86_64.rpm
-rw-r--r-- 1 7155 31415  40M Sep 16 09:43 mysql-shell-debuginfo-1.0.5-0.1.labs.el7.x86_64.rpm
So, let's get this installed. I recommend using yum so that dependencies are resolved automatically:
[root@box01 ~]# yum -y install *.rpm
[...snip...]
Installed:
  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-embedded.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-embedded-compat.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-embedded-devel.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-test.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-router.x86_64 0:2.1.0-0.1.labs.el7
  mysql-router-debuginfo.x86_64 0:2.1.0-0.1.labs.el7
  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7
  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7

Dependency Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7

Replaced:
  mariadb-libs.x86_64 1:5.5.41-2.el7_0
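A quick sanity check that everything landed; these are just the standard --version flags, nothing specific to the labs release:

#: the versions shipped in the labs tarball (1.0.5, 2.1.0 and 5.7.15 respectively)
[wb@box01 ~]$ mysqlsh --version
[wb@box01 ~]$ mysqlrouter --version
[wb@box01 ~]$ mysqld --version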
Now it's time to start the MySQL InnoDB Cluster! From now on, make sure you're using a user other than root!

First step: start MySQL 5.7 and change the root password, as we would for a normal MySQL instance:
[wb@box01 rpms]# systemctl start mysqld.service
[wb@box01 rpms]# cat /var/log/mysqld.log | grep temp
2016-09-20T23:45:06.950465Z 1 [Note] A temporary password is generated for root@localhost: agaUf8YrhQ!R
2016-09-20T23:45:10.198806Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
[wb@box01 rpms]# mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.15-labs-gr090

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> alter user root@localhost identified by 'P@ssw0rd';
Query OK, 0 rows affected (0.00 sec)

mysql> \q
Bye
One note at this point: if you previously tried to create an instance, for example on port 3310, and it failed, its sandbox directory (/root/mysql-sandboxes/3310 in my case, since that first attempt ran as root) won't be empty, and the error below will be raised on the next attempt. Make sure that directory is clean before creating the instance again:
Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

ERROR: Error executing the 'sandbox create' command: The sandbox dir '/root/mysql-sandboxes/3310' is not empty.
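Cleaning it up is just a matter of removing the stale directory; a minimal sketch, assuming the failed sandbox is not running and that the path matches your user and port (here root's sandbox on port 3310, as in the error above):

#: remove the leftovers of the failed sandbox creation before retrying
[root@box01 ~]# rm -rf /root/mysql-sandboxes/3310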
So, with the root password P@ssw0rd set and MySQL 5.7 up and running, let's deploy the sandbox instances that will soon be added to our InnoDB Cluster. Start MySQL Shell with mysqlsh and, from its JS prompt, deploy the instances; below I deployed five of them:
mysql-js> dba.deployLocalInstance(3310)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3310

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use '\connect root@localhost:3310' to connect to the instance.

mysql-js> dba.deployLocalInstance(3311)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3311

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3311 successfully deployed and started.
Use '\connect root@localhost:3311' to connect to the instance.

mysql-js> dba.deployLocalInstance(3312)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3312

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3312 successfully deployed and started.
Use '\connect root@localhost:3312' to connect to the instance.

mysql-js> dba.deployLocalInstance(3313)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3313

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3313 successfully deployed and started.
Use '\connect root@localhost:3313' to connect to the instance.

mysql-js> dba.deployLocalInstance(3314)
A new MySQL sandbox instance will be created on this host in
/home/wb/mysql-sandboxes/3314

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3314 successfully deployed and started.
Use '\connect root@localhost:3314' to connect to the instance.
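By the way, the mysql-js prompt is a full JavaScript interpreter, so the repetition above can be collapsed into a loop; a sketch, assuming each dba.deployLocalInstance() call still prompts for its root password interactively as shown above:

mysql-js> for (var port = 3310; port <= 3314; port++) { dba.deployLocalInstance(port); }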
As the manual says, the next step is to initialize the cluster. We first connect to one of the instances we created previously; any of them can serve as the point from which to initialize the cluster:
mysql-js> \connect root@localhost:3310
Creating a Session to 'root@localhost:3310'
Enter password:
Classic Session successfully established. No default schema selected.

mysql-js> cluster = dba.createCluster('wbCluster001')
A new InnoDB cluster will be created on instance 'root@localhost:3310'.

When setting up a new InnoDB cluster it is required to define an administrative
MASTER key for the cluster. This MASTER key needs to be re-entered when making
changes to the cluster later on, e.g. adding new MySQL instances or configuring
MySQL Routers. Losing this MASTER key will require the configuration of all
InnoDB cluster entities to be changed.

Please specify an administrative MASTER key for the cluster 'wbCluster001':
Creating InnoDB cluster 'wbCluster001' on 'root@localhost:3310'...
Adding Seed Instance...

Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

mysql-js>
A MASTER key is required to create the cluster. Make sure the value you provide as the MASTER key is well protected and that you don't lose it, as it's essential for InnoDB Cluster management.
So, our MySQL InnoDB Cluster is created. Voilà!
The next step is to add the remaining instances, which will act as replicas, to the existing MySQL InnoDB Cluster, wbCluster001.
mysql-js> cluster.addInstance('root@localhost:3311')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3311':
Adding instance to the cluster ...

The instance 'root@localhost:3311' was successfully added to the cluster.

mysql-js> cluster.addInstance('root@localhost:3312')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3312':
Adding instance to the cluster ...

The instance 'root@localhost:3312' was successfully added to the cluster.

mysql-js> cluster.addInstance('root@localhost:3313')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3313':
Adding instance to the cluster ...

The instance 'root@localhost:3313' was successfully added to the cluster.

mysql-js> cluster.addInstance('root@localhost:3314')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@localhost:3314':
Adding instance to the cluster ...

The instance 'root@localhost:3314' was successfully added to the cluster.
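The same loop idea from the deployment step works here too; a sketch, under the same assumption that each call prompts for the instance's password:

mysql-js> for (var port = 3311; port <= 3314; port++) { cluster.addInstance('root@localhost:' + port); }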
Finally, we can check the whole cluster:
mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 3 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3311": {
                        "address": "localhost:3311",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3312": {
                        "address": "localhost:3312",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3313": {
                        "address": "localhost:3313",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3314": {
                        "address": "localhost:3314",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}
Beautiful!! All nodes report the status ONLINE; a node could instead report OFFLINE, or RECOVERING while it receives updates and catches up with the cluster's state, as happens when we add a new node to an existing cluster. Additionally, only the bootstrapped node is in R/W mode at this point, while the others are R/O. That means the solution was designed to support writes on one node, which is considered the PRIMARY, while the others are SECONDARIES. When the current PRIMARY goes down, one of the secondaries will assume the role.
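Under the hood this is Group Replication running in single-primary mode, so besides cluster.status() you can also peek at the stock MySQL 5.7 Group Replication status from SQL on any member to see which one holds the PRIMARY role; a minimal sketch, using the standard performance_schema objects rather than anything labs-specific:

mysql> -- list the group members and their states (ONLINE / RECOVERING / ...)
mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
    ->   FROM performance_schema.replication_group_members;

mysql> -- server UUID of the member currently holding the PRIMARY role
mysql> SELECT VARIABLE_VALUE AS primary_member
    ->   FROM performance_schema.global_status
    ->  WHERE VARIABLE_NAME = 'group_replication_primary_member';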
At this point we can check a few other things regarding the MySQL InnoDB Cluster. (You'll spot a sixth sandbox, on port 3315, in the listings from here on; I deployed it and added it to the cluster in the meantime, the same way as the others.)
#: local instances metadata
[wb@box01 ~]$ ls -lh ~/mysql-sandboxes/
total 24K
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:00 3310
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:02 3311
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:02 3312
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:02 3313
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:03 3314
drwxrwxr-x 4 wb wb 4.0K Sep 22 01:03 3315

#: sockets open
[wb@box01 ~]$ netstat -na | grep sand
unix  2      [ ACC ]     STREAM     LISTENING     25608    /home/wb/mysql-sandboxes/3315/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     25613    /home/wb/mysql-sandboxes/3315/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     25386    /home/wb/mysql-sandboxes/3313/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     25391    /home/wb/mysql-sandboxes/3313/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     25275    /home/wb/mysql-sandboxes/3312/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     25280    /home/wb/mysql-sandboxes/3312/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     24903    /home/wb/mysql-sandboxes/3310/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     24908    /home/wb/mysql-sandboxes/3310/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     25166    /home/wb/mysql-sandboxes/3311/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     25171    /home/wb/mysql-sandboxes/3311/mysqld.sock
unix  2      [ ACC ]     STREAM     LISTENING     25497    /home/wb/mysql-sandboxes/3314/mysqlx.sock
unix  2      [ ACC ]     STREAM     LISTENING     25502    /home/wb/mysql-sandboxes/3314/mysqld.sock
If you disconnected from mysqlsh and would like to get back to your cluster, connect to the seed instance again and use dba.getCluster() to assign the cluster to a variable; then call cluster.status() again, as below:
mysql-js> \connect root@localhost:3310
Creating a Session to 'root@localhost:3310'
Enter password:
Classic Session successfully established. No default schema selected.

mysql-js> cluster = dba.getCluster()
When the InnoDB cluster was setup, a MASTER key was defined in order to enable
performing administrative tasks on the cluster.

Please specify the administrative MASTER key for the default cluster:
<Cluster:wbCluster001>
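dba.getCluster() should also accept a cluster name, so if more than one cluster is registered in the metadata you can fetch a specific one instead of taking the default; a sketch, reusing our cluster's name:

mysql-js> cluster = dba.getCluster('wbCluster001')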
And cluster.status() again, this time showing the sixth node (localhost:3315) and tolerance to up to 4 failures:
mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster tolerant to up to 4 failures.",
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "localhost:3311": {
                        "address": "localhost:3311",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3312": {
                        "address": "localhost:3312",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3313": {
                        "address": "localhost:3313",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3314": {
                        "address": "localhost:3314",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "localhost:3315": {
                        "address": "localhost:3315",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}
mysql-js> \q
Bye!
More resources:
- Docs: https://dev.mysql.com/doc/mysql-innodb-cluster/en/