{"id":1259,"date":"2016-09-25T10:46:15","date_gmt":"2016-09-25T13:46:15","guid":{"rendered":"http:\/\/wagnerbianchi.com\/blog\/?p=1259"},"modified":"2016-09-26T12:59:59","modified_gmt":"2016-09-26T15:59:59","slug":"mysql-innodb-cluster-now-with-remote-nodes","status":"publish","type":"post","link":"http:\/\/wagnerbianchi.com\/blog\/?p=1259","title":{"rendered":"MySQL InnoDB Cluster, now with remote nodes!"},"content":{"rendered":"<p><a href=\"http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/Screen-Shot-2016-09-25-at-10.48.55-AM-e1474905546469.png\" rel=\"attachment wp-att-1282\"><img decoding=\"async\" loading=\"lazy\" src=\"http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/Screen-Shot-2016-09-25-at-10.48.55-AM-e1474905546469.png\" alt=\"screen-shot-2016-09-25-at-10-48-55-am\" width=\"290\" height=\"201\" class=\"alignleft size-full wp-image-1282\" \/><\/a>In this post I\u2019m going to extend the tests I made with MySQL InnoDB Cluster in the previous post, creating a group of instances on separate servers. That is, I\u2019m going to test how to create a new cluster with three different machines, considering that a cluster created on one giant server is a big single point of failure: if that giant server crashes, all of the cluster\u2019s members crash altogether. <\/p>\n<p>Preventing that situation is part of any project using a database whose principle is to scale out in order to serve more and more data requests. This is a subject for another blog post, in which we can discuss the main strategies for splitting writes and reads, and it goes beyond the scope of this current post. 
<\/p>\n<p>I\u2019m going to concentrate here on creating the cluster with 3 machines. I\u2019m using Vagrant to create them, and the following is the script that creates the virtual machines:<\/p>\n<pre lang=\"bash\" line=\"1\"># -*- mode: ruby -*-\r\n# vi: set ft=ruby :\r\n\r\nVAGRANTFILE_API_VERSION = \"2\"\r\n\r\nVagrant.configure(VAGRANTFILE_API_VERSION) do |config|\r\n  config.vm.define \"box01\" do |box01|\r\n    box01.vm.hostname = \"box01\"\r\n    box01.vm.box = \"centos7.0_x86_64\"\r\n    box01.vm.network \"private_network\", ip: \"192.168.50.11\", virtualbox__intnet: \"mysql_innodb_cluster\"\r\n  end\r\n\r\n  config.vm.define \"box02\" do |box02|\r\n    box02.vm.hostname = \"box02\"\r\n    box02.vm.box = \"centos7.0_x86_64\"\r\n    box02.vm.network \"private_network\", ip: \"192.168.50.12\", virtualbox__intnet: \"mysql_innodb_cluster\"\r\n  end\r\n\r\n  config.vm.define \"box03\" do |box03|\r\n    box03.vm.hostname = \"box03\"\r\n    box03.vm.box = \"centos7.0_x86_64\"\r\n    box03.vm.network \"private_network\", ip: \"192.168.50.13\", virtualbox__intnet: \"mysql_innodb_cluster\"\r\n  end\r\nend<\/pre>\n<p>I\u2019m assuming that you have added a CentOS 7 image to your local Vagrant boxes library and that you\u2019re using the VirtualBox hypervisor driver to create the virtual machines. If your setup differs from this, the above script may not work as expected. Below, the machines are running:<\/p>\n<pre lang=\"bash\">wagnerbianchi01-3:mysql_innodb_cluster01 root# vagrant status\r\nCurrent machine states:\r\nbox01                     running (virtualbox)\r\nbox02                     running (virtualbox)\r\nbox03                     running (virtualbox)<\/pre>\n<p>With that, we can start configuring the servers in order to create the cluster. Basically, the steps are as follows:<\/p>\n<p><strong>1. 
Set up all packages on all three servers<\/strong><\/p>\n<p>On the first server, install all the packages, including the router, as we are going to bootstrap MySQL Router on that node. You don\u2019t need to install the MySQL Router package on the other two nodes, as it\u2019s not needed there. MySQL Shell should be installed on all three nodes. Below I show the packages I installed on each of the nodes:<\/p>\n<pre lang=\"bash\">#: box01\r\n  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-router.x86_64 0:2.1.0-0.1.labs.el7\r\n  mysql-router-debuginfo.x86_64 0:2.1.0-0.1.labs.el7\r\n  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7\r\n  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7\r\n\r\n#: box02\r\n  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7\r\n  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7\r\n\r\n#: box03\r\n  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7\r\n  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7\r\n  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7<\/pre>\n<p>To grab all these 
packages for your tests, click here (http:\/\/downloads.mysql.com\/snapshots\/pb\/mysql-innodb-cluster-5.7.15-preview\/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz)<\/p>\n<p><strong>2. Add the correct settings to the MySQL configuration file, aka my.cnf:<\/strong><\/p>\n<pre lang=\"bash\">[root@box01 mysql]# cat \/etc\/my.cnf\r\n[mysqld]\r\nuser=mysql\r\ndatadir=\/var\/lib\/mysql\r\nsocket=\/var\/lib\/mysql\/mysql.sock\r\n\r\n# Disabling symbolic-links is recommended to prevent assorted security risks\r\nsymbolic-links=0\r\n\r\nlog-error=\/var\/log\/mysqld.log\r\npid-file=\/var\/run\/mysqld\/mysqld.pid\r\n\r\n#: innodb cluster configs\r\nserver_id=1\r\nbinlog_checksum=none\r\nenforce_gtid_consistency=on\r\ngtid_mode=on\r\nlog_bin\r\nlog_slave_updates\r\nmaster_info_repository=TABLE\r\nrelay_log_info_repository=TABLE\r\ntransaction_write_set_extraction=XXHASH64<\/pre>\n<p>Make sure you restart mysqld if you add new settings after having initialized it, so that the above variables take effect.<\/p>\n<p><strong>3. Initialize mysqld (using --initialize-insecure) and restart the service:<\/strong><\/p>\n<pre lang=\"bash\">[root@box01 ~]# mysqld --initialize-insecure\r\n[root@box01 mysql]# ls -lh\r\ninsgesamt 109M\r\n-rw-r----- 1 mysql mysql   56 24. Sep 16:23 auto.cnf\r\n-rw-r----- 1 mysql mysql  169 24. Sep 16:23 box01-bin.000001\r\n-rw-r----- 1 mysql mysql   19 24. Sep 16:23 box01-bin.index\r\n-rw-r----- 1 mysql mysql  413 24. Sep 16:23 ib_buffer_pool\r\n-rw-r----- 1 mysql mysql  12M 24. Sep 16:23 ibdata1\r\n-rw-r----- 1 mysql mysql  48M 24. Sep 16:23 ib_logfile0\r\n-rw-r----- 1 mysql mysql  48M 24. Sep 16:23 ib_logfile1\r\ndrwxr-x--- 2 mysql mysql 4,0K 24. Sep 16:23 mysql\r\ndrwxr-x--- 2 mysql mysql 8,0K 24. Sep 16:23 performance_schema\r\ndrwxr-x--- 2 mysql mysql 8,0K 24. 
Sep 16:23 sys\r\n[root@box01 mysql]# systemctl restart mysqld.service\r\n[root@box01 mysql]# systemctl status mysqld.service\r\nmysqld.service - MySQL Server\r\n   Loaded: loaded (\/usr\/lib\/systemd\/system\/mysqld.service; enabled)\r\n   Active: active (running) since Sa 2016-09-24 16:25:13 CEST; 6s ago\r\n  Process: 17112 ExecStart=\/usr\/sbin\/mysqld --daemonize --pid-file=\/var\/run\/mysqld\/mysqld.pid $MYSQLD_OPTS (code=exited, status=0\/SUCCESS)\r\n  Process: 17095 ExecStartPre=\/usr\/bin\/mysqld_pre_systemd (code=exited, status=0\/SUCCESS)\r\n Main PID: 17116 (mysqld)\r\n   CGroup: \/system.slice\/mysqld.service\r\n           \u2514\u250017116 \/usr\/sbin\/mysqld --daemonize --pid-file=\/var\/run\/mysqld\/mysqld.pid\r\n\r\nSep 24 16:25:12 box01 systemd[1]: Starting MySQL Server...\r\nSep 24 16:25:13 box01 systemd[1]: Started MySQL Server.<\/pre>\n<p><strong>4. Configure the password for root@\u2018%\u2019, giving GRANT OPTION to this user:<\/strong><\/p>\n<p>In this step you need to grant the right privileges to root@\u2018%\u2019 and configure a password for this user, which will be used soon to complete the setup. In the next step, which validates the instance, you will be prompted for this root@\u2018%\u2019 password, so follow the steps below on all three nodes:<\/p>\n<pre lang=\"mysql\">#: create and configure root@'%'\r\nmysql> grant all on *.* to root@'%' identified by 'bianchi' with grant option;\r\nQuery OK, 0 rows affected, 1 warning (0,00 sec) -- don\u2019t worry about this warning\r\n\r\n#: configure the password for root@localhost\r\nmysql> set password='bianchi';\r\nQuery OK, 0 rows affected (0,00 sec)\r\n\r\n#: in any case, flush the grant tables\r\nmysql> flush privileges;\r\nQuery OK, 0 rows affected (0,00 sec)<\/pre>\n<p><strong>5. 
Validate the instances; this is done by accessing MySQL Shell on all three nodes and running the command below:<\/strong><\/p>\n<pre lang=\"javascript\">mysql-js> dba.validateInstance('root@localhost:3306')\r\nPlease provide a password for 'root@localhost:3306':\r\nValidating instance...\r\n\r\nRunning check command.\r\nChecking Group Replication prerequisites.\r\n* Comparing options compatibility with Group Replication... PASS\r\nServer configuration is compliant with the requirements.\r\n* Checking server version... PASS\r\nServer is 5.7.15\r\n\r\n* Checking that server_id is unique... PASS\r\nThe server_id is valid.\r\n\r\n* Checking compliance of existing tables... PASS\r\n\r\nThe instance: localhost:3306 is valid for Cluster usage<\/pre>\n<p>At this point, since we\u2019re going to start accessing instances all around, make sure you configure iptables appropriately, or simply flush all of its configured chains, in order to avoid the message below when accessing remote nodes:<\/p>\n<pre lang=\"text\">[root@box01 mysql]# mysql -u root -p -h box02\r\nEnter password:\r\nERROR 2003 (HY000): Can't connect to MySQL server on 'box02' (113)\r\n\r\n[root@box02 ~]# iptables -F\r\n[root@box02 ~]# systemctl stop firewalld\r\n\r\n[root@box01 mysql]# mysql -u root -p -h box02\r\nEnter password:\r\nWelcome to the MySQL monitor.  Commands end with ; or \\g.\r\nYour MySQL connection id is 4\r\nServer version: 5.7.15-labs-gr090-log MySQL Community Server (GPL)\r\n\r\nCopyright (c) 2000, 2016, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. Other names may be trademarks of their respective\r\nowners.\r\n\r\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\r\n\r\nmysql> \\q\r\nBye<\/pre>\n<p><strong>6. 
At this point, we need to create a cluster:<\/strong><\/p>\n<p>Let\u2019s use box01 as the server in which we will create the cluster and bootstrap it, creating all the cluster\u2019s metadata.<\/p>\n<pre lang=\"bash\">#: create the cluster on box01\r\n[root@box01 mysql]# mysqlsh\r\nWelcome to MySQL Shell 1.0.5-labs Development Preview\r\n\r\nCopyright (c) 2016, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. Other names may be trademarks of their respective\r\nowners.\r\n\r\nType '\\help', '\\h' or '\\?' for help, type '\\quit' or '\\q' to exit.\r\n\r\nCurrently in JavaScript mode. Use \\sql to switch to SQL mode and execute queries.\r\nmysql-js> \\c root@localhost:3306\r\nCreating a Session to 'root@localhost:3306'\r\nEnter password:\r\nClassic Session successfully established. No default schema selected.\r\n\r\nmysql-js> cluster = dba.createCluster('wbCluster001')\r\nA new InnoDB cluster will be created on instance 'root@localhost:3306'.\r\n\r\nWhen setting up a new InnoDB cluster it is required to define an administrative\r\nMASTER key for the cluster. This MASTER key needs to be re-entered when making\r\nchanges to the cluster later on, e.g.adding new MySQL instances or configuring\r\nMySQL Routers. Losing this MASTER key will require the configuration of all\r\nInnoDB cluster entities to be changed.\r\n\r\nPlease specify an administrative MASTER key for the cluster 'wbCluster001':\r\nCreating InnoDB cluster 'wbCluster001' on 'root@localhost:3306'...\r\nAdding Seed Instance...\r\n\r\nCluster successfully created. 
Use Cluster.addInstance() to add MySQL instances.\r\nAt least 3 instances are needed for the cluster to be able to withstand up to\r\none server failure.\r\n\r\nmysql-js><\/pre>\n<p>Now we can use the value we stored in the cluster variable to show the status of the newly created cluster:<\/p>\n<pre lang=\"javascript\" line=\"1\">mysql-js> cluster.status()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"defaultReplicaSet\": {\r\n        \"status\": \"Cluster is NOT tolerant to any failures.\",\r\n        \"topology\": {\r\n            \"localhost:3306\": {\r\n                \"address\": \"localhost:3306\",\r\n                \"status\": \"ONLINE\",\r\n                \"role\": \"HA\",\r\n                \"mode\": \"R\/W\",\r\n                \"leaves\": {}\r\n            }\r\n        }\r\n    }\r\n}<\/pre>\n<p>The cluster status at this point shows that it\u2019s not fault tolerant, since no other node is yet part of the cluster wbCluster001. Another thing I verified here, which was present in the scenario of the previous post as well, is that the metadata is created in tables in a database schema called mysql_innodb_cluster_metadata, added to the instance used to create the cluster, and that will be the instance from which to manage the cluster.<\/p>\n<pre lang=\"bash\">#: box01, the instance used as the cluster\u2019s seed\r\nmysql> use mysql_innodb_cluster_metadata\r\nReading table information for completion of table and column names\r\nYou can turn off this feature to get a quicker startup with -A\r\n\r\nDatabase changed\r\nmysql> show tables;\r\n+-----------------------------------------+\r\n| Tables_in_mysql_innodb_cluster_metadata |\r\n+-----------------------------------------+\r\n| clusters                                |\r\n| hosts                                   |\r\n| instances                               |\r\n| replicasets                             |\r\n| schema_version                          
|\r\n+-----------------------------------------+\r\n5 rows in set (0,00 sec)\r\n\r\nmysql> select cluster_id,cluster_name from mysql_innodb_cluster_metadata.clusters\\G\r\n*************************** 1. row ***************************\r\n  cluster_id: 1\r\ncluster_name: wbCluster001\r\n1 row in set (0,00 sec)<\/pre>\n<p><strong>7. Adding instances to the cluster:<\/strong><\/p>\n<p>Now we need to start adding the instances we set up to our existing cluster. To do that, in case you don&#8217;t have the cluster object in the cluster variable anymore, you can use mysqlsh, connect to the instance running on box01:3306 and use <i>dba.getCluster(&#8216;wbCluster001&#8217;)<\/i> again. After doing that, you can move forward and execute the addInstance() calls below to add instances box02 and box03 to the existing cluster.<\/p>\n<pre lang=\"javascript\">mysql-js> \\c root@192.168.50.11:3306\r\nCreating a Session to 'root@192.168.50.11:3306'\r\nEnter password:\r\nClassic Session successfully established. No default schema selected.\r\nmysql-js> cluster = dba.getCluster('wbCluster001')\r\nWhen the InnoDB cluster was setup, a MASTER key was defined in order to enable\r\nperforming administrative tasks on the cluster.\r\n\r\nPlease specify the administrative MASTER key for the cluster 'wbCluster001':\r\n<Cluster:wbCluster001>\r\n\r\n#: adding box02\r\nmysql-js> cluster.addInstance('root@192.168.50.12:3306')\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@192.168.50.12:3306':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@192.168.50.12:3306' was successfully added to the cluster.\r\n\r\n#: adding box03\r\nmysql-js> cluster.addInstance('root@192.168.50.13:3306')\r\nA new instance will be added to the InnoDB cluster. 
Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@192.168.50.13:3306':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@192.168.50.13:3306' was successfully added to the cluster.<\/pre>\n<p>At this point, configuring things exactly the way you\u2019re reading above, I saw the following messages in the error logs of both joiner nodes, box02 and box03:<\/p>\n<pre lang=\"text\">2016-09-25T00:34:11.285509Z 61 [ERROR] Slave I\/O for channel 'group_replication_recovery': error connecting to master 'mysql_innodb_cluster_rpl_user@box01:3306' - retry-time: 60  retries: 1, Error_code: 2005\r\n2016-09-25T00:34:11.285535Z 61 [Note] Slave I\/O thread for channel 'group_replication_recovery' killed while connecting to master\r\n2016-09-25T00:34:11.285539Z 61 [Note] Slave I\/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4\r\n2016-09-25T00:34:11.285963Z 48 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Check group replication recovery's connection credentials.'\r\n2016-09-25T00:34:11.286204Z 48 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 8\/10'<\/pre>\n<p>While more and more connection errors between joiner and donor were added to the error log, I added some entries to \/etc\/hosts on all boxes, and then the issue was fixed. So, it is very important to add the configuration below to the machines\u2019 hosts files to serve as a DNS resolver. 
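A minimal sketch of those entries, based on the private IPs and hostnames defined in the Vagrantfile above, would be:<\/p>\n<pre lang=\"bash\">#: \/etc\/hosts on all three boxes\r\n192.168.50.11   box01\r\n192.168.50.12   box02\r\n192.168.50.13   box03<\/pre>\n<p>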
If you don\u2019t do that, when you check cluster.status(), it\u2019s going to report that the joiner node is stuck in RECOVERING state, as box03, a.k.a. 192.168.50.13:3306, is below.<\/p>\n<pre lang=\"javascript\" line=\"1\">mysql-js> cluster.status()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"defaultReplicaSet\": {\r\n        \"status\": \"Cluster is NOT tolerant to any failures.\",\r\n        \"topology\": {\r\n            \"192.168.50.11:3306\": {\r\n                \"address\": \"192.168.50.11:3306\",\r\n                \"status\": \"ONLINE\",\r\n                \"role\": \"HA\",\r\n                \"mode\": \"R\/W\",\r\n                \"leaves\": {\r\n                    \"192.168.50.12:3306\": {\r\n                        \"address\": \"192.168.50.12:3306\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"192.168.50.13:3306\": {\r\n                        \"address\": \"192.168.50.13:3306\",\r\n                        \"status\": \"RECOVERING\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    }\r\n}<\/pre>\n<p>As many attempts were made while I was fixing the problem related to the hosts file, I had to run cluster.rejoinInstance() for box03, as you can see below:<\/p>\n<pre lang=\"bash\">mysql-js> cluster.rejoinInstance('root@192.168.50.13:3306')\r\nPlease provide the password for 'root@192.168.50.13:3306':\r\nThe instance will try rejoining the InnoDB cluster. 
Depending on the original\r\nproblem that made the instance unavailable, the rejoin operation might not be\r\nsuccessful and further manual steps will be needed to fix the underlying\r\nproblem.\r\n\r\nPlease monitor the output of the rejoin operation and take necessary action if\r\nthe instance cannot rejoin.\r\nEnter the password for server (root@192.168.50.13:3306):\r\nEnter the password for replication_user (mysql_innodb_cluster_rpl_user):\r\nEnter the password for peer_server (root@192.168.50.12:3306):\r\n\r\nRunning join command on '192.168.50.13@3306'.\r\n\r\nRunning health command on '192.168.50.13@3306'.\r\nGroup Replication members:\r\n  - Host: box03\r\n    Port: 3306\r\n    State: ONLINE\r\n  - Host: box02\r\n    Port: 3306\r\n    State: ONLINE\r\n  - Host: box01\r\n    Port: 3306\r\n    State: ONLINE<\/pre>\n<p>So, at this point the cluster is OK, with all three nodes up and running:<\/p>\n<pre lang=\"javascript\" line=\"1\">#: describe cluster\r\nmysql-js> cluster.describe()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"adminType\": \"local\",\r\n    \"defaultReplicaSet\": {\r\n        \"name\": \"default\",\r\n        \"instances\": [\r\n            {\r\n                \"name\": \"192.168.50.11:3306\",\r\n                \"host\": \"192.168.50.11:3306\",\r\n                \"role\": \"HA\"\r\n            },\r\n            {\r\n                \"name\": \"192.168.50.12:3306\",\r\n                \"host\": \"192.168.50.12:3306\",\r\n                \"role\": \"HA\"\r\n            },\r\n            {\r\n                \"name\": \"192.168.50.13:3306\",\r\n                \"host\": \"192.168.50.13:3306\",\r\n                \"role\": \"HA\"\r\n            }\r\n        ]\r\n    }\r\n}\r\n#: cluster status\r\n\r\nmysql-js> cluster.status()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"defaultReplicaSet\": {\r\n        \"status\": \"Cluster is tolerant to 2 failures.\",\r\n        \"topology\": {\r\n            
\"192.168.50.11:3306\": {\r\n                \"address\": \"192.168.50.11:3306\",\r\n                \"status\": \"ONLINE\",\r\n                \"role\": \"HA\",\r\n                \"mode\": \"R\/W\",\r\n                \"leaves\": {\r\n                    \"192.168.50.12:3306\": {\r\n                        \"address\": \"192.168.50.12:3306\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"192.168.50.13:3306\": {\r\n                        \"address\": \"192.168.50.13:3306\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    }\r\n}<\/pre>\n<p>After solving the issues mentioned above, I saw the following events added to the error logs on box02 and box03:<\/p>\n<pre lang=\"text\">#: box02\r\n2016-09-26T14:07:02.432632Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'\r\n\r\n#: box03\r\n2016-09-26T14:14:52.432632Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'<\/pre>\n<p>In the end, you can verify that MySQL Group Replication is the underlying feature that empowers MySQL InnoDB Cluster. 
On box01, or 192.168.50.11:3306:<\/p>\n<pre lang=\"mysql\">mysql-sql> select * from performance_schema.replication_group_members;\r\n+---------------------------+--------------------------------------+-------------+-------------+--------------+\r\n| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |\r\n+---------------------------+--------------------------------------+-------------+-------------+--------------+\r\n| group_replication_applier | b0b1603f-83ef-11e6-85a6-080027de0e0e | box01       |        3306 | ONLINE       |\r\n| group_replication_applier | bb29750c-83ef-11e6-8b4f-080027de0e0e | box02       |        3306 | ONLINE       |\r\n| group_replication_applier | bbu3761b-83ef-11e6-894c-080027de0t0e | box03       |        3306 | ONLINE       |\r\n+---------------------------+--------------------------------------+-------------+-------------+--------------+\r\n3 rows in set (0.00 sec)<\/pre>\n<p>Next time, I\u2019m going to bootstrap the router to show some tests related to routing connections away from failed nodes. My final considerations about this new way to provide HA for an environment using InnoDB: there is not enough documentation yet regarding the existing methods to manipulate instances within the cluster, in case you need to take one off, restart it, or even find out why it is OFFLINE; I haven&#8217;t yet found a way to manipulate nodes other than adding them to the cluster. This is not GA, as the feature was just released, but to me it&#8217;s very promising, it will make it easier to deploy clusters, and I expect to see more and more about this. 
Once again, great job Oracle MySQL Team, let&#8217;s move on!!<\/p>\n<p>You can find more resources on below links:<\/p>\n<p>&#8211; http:\/\/mysqlserverteam.com\/introducing-mysql-innodb-cluster-a-hands-on-tutorial\/<br \/>\n&#8211; http:\/\/mysqlserverteam.com\/introducing-mysql-innodb-cluster-mysql-ha-out-of-box-easy-to-use-high-availability\/<\/p>\n<p>Arrivederci!!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this post I\u2019m going to extend the tests I made with MySQL InnoDB Cluster on the previous post, creating a group of instances with separate servers, that is, I\u2019m going to test how to create a new cluster with three different machines considering that, if you create a cluster using one giant server, maybe [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1259"}],"collection":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1259"}],"version-history":[{"count":44,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1259\/revisions"}],"predecessor-version":[{"id":1305,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1259\/revisions\/1305"}],"wp:attachment":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1259"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fca
tegories&post=1259"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1259"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}