{"id":1229,"date":"2016-09-20T21:34:44","date_gmt":"2016-09-21T00:34:44","guid":{"rendered":"http:\/\/wagnerbianchi.com\/blog\/?p=1229"},"modified":"2016-09-21T20:55:05","modified_gmt":"2016-09-21T23:55:05","slug":"testing-the-new-mysql-innodb-cluster","status":"publish","type":"post","link":"http:\/\/wagnerbianchi.com\/blog\/?p=1229","title":{"rendered":"Testing the New MySQL InnoDB Cluster"},"content":{"rendered":"<p><a href=\"http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/cluster-server.jpg\" rel=\"attachment wp-att-1245\"><img decoding=\"async\" loading=\"lazy\" class=\"alignleft wp-image-1245\" src=\"http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/cluster-server.jpg\" alt=\"cluster-server\" width=\"320\" height=\"235\" srcset=\"http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/cluster-server.jpg 374w, http:\/\/wagnerbianchi.com\/blog\/wp-content\/uploads\/2016\/09\/cluster-server-300x220.jpg 300w\" sizes=\"(max-width: 320px) 100vw, 320px\" \/><\/a>After receiving the announcement made by Oracle via Lefred, I got very curious about the new MySQL InnoDB Cluster. After watching the video, I downloaded the package, grabbed the online manual and started playing with it. My first impression was that it has the simplicity of the MongoDB Shell, but with more resilience, since it is a fault-tolerant cluster in which another node assumes the PRIMARY role should the existing one crash. 
It&#8217;s really good to have something this simple in the MySQL world because, IMHO, everything we have had until now takes some time to set up and get running &#8211; KISS is a very good idea, and I can see that MySQL InnoDB Cluster was created to be simple to set up. Congrats for that, Oracle!<\/p>\n<p><strong>After Downloading Packages&#8230;<\/strong><\/p>\n<p>After getting the packages onto a Vagrant VM, just untar the tarball; I saw that the bundle is made up of three main packages:<\/p>\n<pre lang=\"bash\">[root@box01 ~]# wget http:\/\/downloads.mysql.com\/snapshots\/pb\/mysql-innodb-cluster-5.7.15-preview\/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz\r\n--2016-09-21 00:25:52-- http:\/\/downloads.mysql.com\/snapshots\/pb\/mysql-innodb-cluster-5.7.15-preview\/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz\r\nResolving downloads.mysql.com (downloads.mysql.com)... 137.254.60.14\r\nConnecting to downloads.mysql.com (downloads.mysql.com)|137.254.60.14|:80... connected.\r\nHTTP request sent, awaiting response... 200 OK<\/pre>\n<pre lang=\"bash\">[root@box01 ~]# ls -lh\r\ntotal 1.1G\r\n-rw-r--r-- 1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar\r\n-rw-r--r-- 1 root root 536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz\r\n-rw-r--r-- 1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar\r\n-rw-r--r-- 1 7155 31415 44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar<\/pre>\n<p>Yeah, after tar zvxf, the packages add up to 1.1G! That makes sense, as the bundle comprises all the MySQL 5.7 Server packages, the MySQL Router and the MySQL Shell.<\/p>\n<pre lang=\"bash\">[root@box01 ~]# ls -lhR\r\n.:\r\ntotal 1.1G\r\n-rw-------. 
1 root root 1.4K Jul 16 2015 anaconda-ks.cfg\r\n-rw-r--r-- 1 7155 31415 490M Sep 16 10:14 mysql-5.7.15-labs-gr090-el7-x86_64.rpm-bundle.tar\r\n-rw-r--r-- 1 root root 536M Sep 16 10:18 mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz\r\n-rw-r--r-- 1 7155 31415 4.5M Sep 16 10:14 mysql-router-2.1.0-0.1-labs-el7-x86_64.rpm-bundle.tar\r\n-rw-r--r-- 1 7155 31415 44M Sep 16 10:14 mysql-shell-1.0.5-0.1-labs-el7-x86_64.rpm-bundle.tar\r\ndrwxr-xr-x 2 root root 4.0K Sep 21 01:32 rpms\r\n\r\n.\/rpms:\r\ntotal 538M\r\n-rw-r--r-- 1 7155 31415 24M Sep 15 11:01 mysql-community-client-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 272K Sep 15 11:01 mysql-community-common-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 3.6M Sep 15 11:01 mysql-community-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 44M Sep 15 11:01 mysql-community-embedded-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 23M Sep 15 11:01 mysql-community-embedded-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 120M Sep 15 11:01 mysql-community-embedded-devel-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 2.2M Sep 15 11:02 mysql-community-libs-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 2.1M Sep 15 11:02 mysql-community-libs-compat-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 161M Sep 15 11:02 mysql-community-server-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 112M Sep 15 11:02 mysql-community-test-5.7.15-1.labs_gr090.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 1.2M Sep 16 09:43 mysql-router-2.1.0-0.1.labs.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 3.3M Sep 16 09:43 mysql-router-debuginfo-2.1.0-0.1.labs.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 4.2M Sep 16 09:43 mysql-shell-1.0.5-0.1.labs.el7.x86_64.rpm\r\n-rw-r--r-- 1 7155 31415 40M Sep 16 09:43 mysql-shell-debuginfo-1.0.5-0.1.labs.el7.x86_64.rpm<\/pre>\n<p>So, let&#8217;s get this installed; I recommend using 
yum to resolve dependencies.<\/p>\n<pre lang=\"bash\">[root@box01 ~]# yum -y install *.rpm\r\n[...snip...]\r\nInstalled:\r\nmysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-embedded.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-embedded-compat.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-embedded-devel.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-community-test.x86_64 0:5.7.15-1.labs_gr090.el7\r\nmysql-router.x86_64 0:2.1.0-0.1.labs.el7\r\nmysql-router-debuginfo.x86_64 0:2.1.0-0.1.labs.el7\r\nmysql-shell.x86_64 0:1.0.5-0.1.labs.el7\r\nmysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7\r\n\r\nDependency Installed:\r\nperl-Data-Dumper.x86_64 0:2.145-3.el7\r\n\r\nReplaced:\r\nmariadb-libs.x86_64 1:5.5.41-2.el7_0<\/pre>\n<p>Now it&#8217;s time to start a MySQL InnoDB Cluster! From now on, make sure you&#8217;re using a user other than root!<\/p>\n<p>First step: start MySQL 5.7 and change the root password, as we would for a normal MySQL instance:<\/p>\n<pre lang=\"bash\">[wb@box01 rpms]# systemctl start mysqld.service\r\n[wb@box01 rpms]# cat \/var\/log\/mysqld.log | grep temp\r\n2016-09-20T23:45:06.950465Z 1 [Note] A temporary password is generated for root@localhost: agaUf8YrhQ!R\r\n2016-09-20T23:45:10.198806Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables\r\n[wb@box01 rpms]# mysql -p\r\nEnter password:\r\nWelcome to the MySQL monitor. Commands end with ; or \\g.\r\nYour MySQL connection id is 2\r\nServer version: 5.7.15-labs-gr090\r\n\r\nCopyright (c) 2000, 2016, Oracle and\/or its affiliates. All rights reserved.\r\n\r\nOracle is a registered trademark of Oracle Corporation and\/or its\r\naffiliates. 
Other names may be trademarks of their respective\r\nowners.\r\n\r\nType 'help;' or '\\h' for help. Type '\\c' to clear the current input statement.\r\n\r\nmysql&gt; alter user root@localhost identified by 'P@ssw0rd';\r\nQuery OK, 0 rows affected (0.00 sec)\r\n\r\nmysql&gt; \\q\r\nBye<\/pre>\n<p>At this point, if a previous attempt to create an instance &#8211; for example on port 3310 &#8211; has failed, the directory \/root\/mysql-sandboxes\/3310 won&#8217;t be empty and an error will be raised if you try again. Make sure that directory is clean before creating the instance again:<\/p>\n<pre lang=\"bash\">Please enter a MySQL root password for the new instance:\r\nDeploying new MySQL instance...\r\nERROR: Error executing the 'sandbox create' command: The sandbox dir '\/root\/mysql-sandboxes\/3310' is not empty.\r\n<\/pre>\n<p>So, with the root password P@ssw0rd set and MySQL 5.7 up and running, let&#8217;s deploy the instances that will soon be added to our InnoDB Cluster. Below I deployed 5 instances:<\/p>\n<pre lang=\"bash\" line=\"1\">mysql-js&gt; dba.deployLocalInstance(3310)\r\nA new MySQL sandbox instance will be created on this host in\r\n\/home\/wb\/mysql-sandboxes\/3310\r\n\r\nPlease enter a MySQL root password for the new instance:\r\nDeploying new MySQL instance...\r\n\r\nInstance localhost:3310 successfully deployed and started.\r\nUse '\\connect root@localhost:3310' to connect to the instance.\r\n\r\nmysql-js&gt; dba.deployLocalInstance(3311)\r\nA new MySQL sandbox instance will be created on this host in\r\n\/home\/wb\/mysql-sandboxes\/3311\r\n\r\nPlease enter a MySQL root password for the new instance:\r\nDeploying new MySQL instance...\r\n\r\nInstance localhost:3311 successfully deployed and started.\r\nUse '\\connect root@localhost:3311' to connect to the instance.\r\n\r\nmysql-js&gt; dba.deployLocalInstance(3312)\r\nA new MySQL sandbox instance will be created on this host in\r\n\/home\/wb\/mysql-sandboxes\/3312\r\n\r\nPlease enter a MySQL root 
password for the new instance:\r\nDeploying new MySQL instance...\r\n\r\nInstance localhost:3312 successfully deployed and started.\r\nUse '\\connect root@localhost:3312' to connect to the instance.\r\n\r\nmysql-js&gt; dba.deployLocalInstance(3313)\r\nA new MySQL sandbox instance will be created on this host in\r\n\/home\/wb\/mysql-sandboxes\/3313\r\n\r\nPlease enter a MySQL root password for the new instance:\r\nDeploying new MySQL instance...\r\n\r\nInstance localhost:3313 successfully deployed and started.\r\nUse '\\connect root@localhost:3313' to connect to the instance.\r\n\r\nmysql-js&gt; dba.deployLocalInstance(3314)\r\nA new MySQL sandbox instance will be created on this host in\r\n\/home\/wb\/mysql-sandboxes\/3314\r\n\r\nPlease enter a MySQL root password for the new instance:\r\nDeploying new MySQL instance...\r\n\r\nInstance localhost:3314 successfully deployed and started.\r\nUse '\\connect root@localhost:3314' to connect to the instance.<\/pre>\n<p>As the manual says, the next step is to initialize the cluster. After connecting to one of the instances we created previously &#8211; any of them can be used as the seed &#8211; we create the cluster:<\/p>\n<pre lang=\"bash\">mysql-js&gt; \\connect root@localhost:3310\r\nCreating a Session to 'root@localhost:3310'\r\nEnter password:\r\nClassic Session successfully established. No default schema selected.\r\nmysql-js&gt; cluster = dba.createCluster('wbCluster001')\r\nA new InnoDB cluster will be created on instance 'root@localhost:3310'.\r\n\r\nWhen setting up a new InnoDB cluster it is required to define an administrative\r\nMASTER key for the cluster. This MASTER key needs to be re-entered when making\r\nchanges to the cluster later on, e.g.adding new MySQL instances or configuring\r\nMySQL Routers. 
Losing this MASTER key will require the configuration of all\r\nInnoDB cluster entities to be changed.\r\n\r\nPlease specify an administrative MASTER key for the cluster 'wbCluster001':\r\nCreating InnoDB cluster 'wbCluster001' on 'root@localhost:3310'...\r\nAdding Seed Instance...\r\n\r\nCluster successfully created. Use Cluster.addInstance() to add MySQL instances.\r\nAt least 3 instances are needed for the cluster to be able to withstand up to\r\none server failure.\r\n\r\n\r\nmysql-js&gt;<\/pre>\n<p>A MASTER key is required to create the cluster; make sure the value you provide as the MASTER key is well protected and that you don&#8217;t lose it &#8211; it&#8217;s essential for InnoDB Cluster management.<\/p>\n<p><strong>So, our MySQL InnoDB Cluster is created, Voil\u00e0!<\/strong><\/p>\n<p>The next step is to add the other instances, now replicas, to the existing MySQL InnoDB Cluster, <strong>wbCluster001<\/strong>.<\/p>\n<pre lang=\"bash\" line=\"1\">mysql-js&gt; cluster.addInstance('root@localhost:3311')\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@localhost:3311':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@localhost:3311' was successfully added to the cluster.\r\n\r\nmysql-js&gt; cluster.addInstance('root@localhost:3312')\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@localhost:3312':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@localhost:3312' was successfully added to the cluster.\r\n\r\nmysql-js&gt; cluster.addInstance('root@localhost:3313')\r\nA new instance will be added to the InnoDB cluster. 
Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@localhost:3313':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@localhost:3313' was successfully added to the cluster.\r\n\r\nmysql-js&gt; cluster.addInstance('root@localhost:3314')\r\nA new instance will be added to the InnoDB cluster. Depending on the amount of\r\ndata on the cluster this might take from a few seconds to several hours.\r\n\r\nPlease provide the password for 'root@localhost:3314':\r\nAdding instance to the cluster ...\r\n\r\nThe instance 'root@localhost:3314' was successfully added to the cluster.<\/pre>\n<p>Finally, we can check the whole cluster:<\/p>\n<pre lang=\"javascript\" line=\"1\">mysql-js&gt; cluster.status()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"defaultReplicaSet\": {\r\n        \"status\": \"Cluster tolerant to up to 3 failures.\",\r\n        \"topology\": {\r\n            \"localhost:3310\": {\r\n                \"address\": \"localhost:3310\",\r\n                \"status\": \"ONLINE\",\r\n                \"role\": \"HA\",\r\n                \"mode\": \"R\/W\",\r\n                \"leaves\": {\r\n                    \"localhost:3311\": {\r\n                        \"address\": \"localhost:3311\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3312\": {\r\n                        \"address\": \"localhost:3312\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3313\": {\r\n                        \"address\": \"localhost:3313\",\r\n                        \"status\": 
\"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3314\": {\r\n                        \"address\": \"localhost:3314\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    }\r\n}<\/pre>\n<p>Beautiful!! All nodes report the status ONLINE; a node could instead report OFFLINE, or RECOVERING while it is receiving updates and catching up with the cluster&#8217;s state, as happens when we add a new node to an existing cluster. Additionally, only the bootstrapped node is in R\/W mode at this point and the others are in R\/O. That means the solution was designed to accept writes on one node, considered the PRIMARY, while the others are considered SECONDARIES. When the current primary goes down, one of the secondaries will assume the role.<\/p>\n<p>At this point we can check a few other things regarding the MySQL InnoDB Cluster.<\/p>\n<pre lang=\"bash\">#: local instances metadata\r\n[wb@box01 ~]$ ls -lh ~\/mysql-sandboxes\/\r\ninsgesamt 24K\r\ndrwxrwxr-x 4 wb wb 4,0K 22. Sep 01:00 3310\r\ndrwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3311\r\ndrwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3312\r\ndrwxrwxr-x 4 wb wb 4,0K 22. Sep 01:02 3313\r\ndrwxrwxr-x 4 wb wb 4,0K 22. Sep 01:03 3314\r\ndrwxrwxr-x 4 wb wb 4,0K 22. 
Sep 01:03 3315\r\n\r\n#: sockets open\r\n[wb@box01 ~]$ netstat -na | grep sand\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25608    \/home\/wb\/mysql-sandboxes\/3315\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25613    \/home\/wb\/mysql-sandboxes\/3315\/mysqld.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25386    \/home\/wb\/mysql-sandboxes\/3313\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25391    \/home\/wb\/mysql-sandboxes\/3313\/mysqld.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25275    \/home\/wb\/mysql-sandboxes\/3312\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25280    \/home\/wb\/mysql-sandboxes\/3312\/mysqld.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         24903    \/home\/wb\/mysql-sandboxes\/3310\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         24908    \/home\/wb\/mysql-sandboxes\/3310\/mysqld.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25166    \/home\/wb\/mysql-sandboxes\/3311\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25171    \/home\/wb\/mysql-sandboxes\/3311\/mysqld.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25497    \/home\/wb\/mysql-sandboxes\/3314\/mysqlx.sock\r\nunix  2      [ ACC ]     STREAM     H\u00d6RT         25502    \/home\/wb\/mysql-sandboxes\/3314\/mysqld.sock\r\n<\/pre>\n<p>If you disconnected from <a href=\"https:\/\/dev.mysql.com\/doc\/refman\/5.7\/en\/mysql-shell-features.html\" target=\"_blank\">mysqlsh<\/a> and would like to reconnect to the cluster you created, connect to the instance you used as the seed, call dba.getCluster() to assign the cluster you want to check to a variable, and then call cluster.status() again, as below:<\/p>\n<pre lang=\"bash\">mysql-js> \\connect root@localhost:3310\r\nCreating a Session to 'root@localhost:3310'\r\nEnter 
password:\r\nClassic Session successfully established. No default schema selected.\r\nmysql-js> cluster = dba.getCluster()\r\nWhen the InnoDB cluster was setup, a MASTER key was defined in order to enable\r\nperforming administrative tasks on the cluster.\r\n\r\nPlease specify the administrative MASTER key for the default cluster:\r\n<Cluster:wbCluster001><\/pre>\n<p>And the cluster.status()<\/p>\n<pre lang=\"javascript\" line=\"1\">mysql-js> cluster.status()\r\n{\r\n    \"clusterName\": \"wbCluster001\",\r\n    \"defaultReplicaSet\": {\r\n        \"status\": \"Cluster tolerant to up to 4 failures.\",\r\n        \"topology\": {\r\n            \"localhost:3310\": {\r\n                \"address\": \"localhost:3310\",\r\n                \"status\": \"ONLINE\",\r\n                \"role\": \"HA\",\r\n                \"mode\": \"R\/W\",\r\n                \"leaves\": {\r\n                    \"localhost:3311\": {\r\n                        \"address\": \"localhost:3311\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3312\": {\r\n                        \"address\": \"localhost:3312\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3313\": {\r\n                        \"address\": \"localhost:3313\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3314\": {\r\n                        \"address\": \"localhost:3314\",\r\n                        \"status\": \"ONLINE\",\r\n                
        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    },\r\n                    \"localhost:3315\": {\r\n                        \"address\": \"localhost:3315\",\r\n                        \"status\": \"ONLINE\",\r\n                        \"role\": \"HA\",\r\n                        \"mode\": \"R\/O\",\r\n                        \"leaves\": {}\r\n                    }\r\n                }\r\n            }\r\n        }\r\n    }\r\n}\r\nmysql-js> \\q\r\nBye!<\/pre>\n<p><strong>More resources:<\/strong><\/p>\n<ul>\n<li>Docs:\u00a0https:\/\/dev.mysql.com\/doc\/mysql-innodb-cluster\/en\/<\/li>\n<\/ul>\n<p><iframe loading=\"lazy\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/JWy7ZLXxtZ4?feature=oembed\" frameborder=\"0\" allowfullscreen><\/iframe><\/p>\n","protected":false},"excerpt":{"rendered":"<p>After receiving the announcement done by Oracle via Lefred, I got myself very curious about the new MySQL InnoDB Cluster. After watching the video, I downloaded the package, got the online manual and started playing with it. 
My first impression was that it has the simplicity of the MongoDB Shell, but with more resilience because [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[4],"tags":[],"_links":{"self":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1229"}],"collection":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=1229"}],"version-history":[{"count":27,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1229\/revisions"}],"predecessor-version":[{"id":1258,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=\/wp\/v2\/posts\/1229\/revisions\/1258"}],"wp:attachment":[{"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=1229"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=1229"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/wagnerbianchi.com\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=1229"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}