
Starting with MySQL Cluster

MySQL Cluster originates from a product called Network DataBase (NDB), created at Ericsson, the Swedish telecom company. Ericsson’s intention was to have a database running on the network as a service. Today’s MySQL Cluster keeps the NDB name from that original technology. For example, the storage engine used through MySQL Server is NDB (instead of InnoDB or MyISAM). In general, whenever you see NDBCLUSTER, you can think of it as “MySQL Cluster”.
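Just to make the engine name concrete, here is a minimal sketch (assuming a running SQL node, which we will only have at the end of this series, and a database named test) of how NDB shows up in a table definition:

[ root@node4 ~ ] mysql -e "CREATE TABLE test.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;"

NDB is accepted as a synonym for NDBCLUSTER in the ENGINE clause.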

Usually we have many parts involved in a cluster technology and with MySQL Cluster it isn’t different. MySQL Cluster implements three kinds of nodes to achieve the objective of having no SPOF (Single Point Of Failure) and to eliminate downtime, keeping data available as long as possible. Besides that, an automatic failover process takes place whenever one of the nodes crashes in the middle of operations. The node kinds are:

  • Management Node: this node’s function is to manage the environment. The client program called ndb_mgm connects to its daemon, called ndb_mgmd, and can be used to retrieve information about the other connected cluster nodes and to execute services such as a cluster backup, for example;
  • Storage Node (or Data Node): connected to the management node, these nodes are the cluster’s storage, used to store and retrieve the databases’ data. I strongly recommend starting a new MySQL Cluster with at least two data nodes (four is better);
  • API Node (or SQL Node): this node is responsible for receiving all external interaction (SQL commands) and for manipulating the data on the storage nodes.

Now that we understand better what each cluster node is, we need to know how to start the configuration of a simple MySQL Cluster, which is the main topic of this article. Because MySQL Cluster has a shared-nothing architecture, i.e., each component has its own hardware and structure, we must start this “simple” project using some virtual machine software (I am using Oracle VirtualBox) to create at least five machines, each one with its node name as the hostname, firewalls disabled, SELinux disabled and a static network IP. I am using CentOS 5.5 as the operating system and MySQL Cluster 7.1.9.
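Just as an illustration of the layout, the hostname-to-IP plan I am assuming in this article can be put into /etc/hosts on every machine (these names and addresses are my lab choices, not requirements):

192.168.0.101   node1   # Management Node
192.168.0.102   node2   # Storage/Data Node
192.168.0.103   node3   # Storage/Data Node
192.168.0.104   node4   # API/SQL Node
192.168.0.105   node5   # API/SQL Node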

The first step is to configure node1, which I set up as the Management Node: the node that will serve to retrieve information about all the other cluster nodes and to execute services such as backing up, starting and restarting nodes. During the operating system installation on that machine, you must ensure that the firewall and SELinux are disabled (MySQL Cluster has problems with firewalls because the nodes need to communicate over a number of ports; if you skipped that during installation, see the commands just after the next block). Configure a static IP in the OS and download MySQL-cluster-gpl-management-xxxxx-rhel-x86-64.rpm and MySQL-cluster-gpl-tools-xxxxx-rhel-x86-64.rpm. After that, we log in to the Linux terminal as root, create a new directory named mysql-cluster under /usr/local and move the files from the Downloads directory (I am using Firefox with its default configuration) to /usr/local/mysql-cluster. We also need to create the DataDir where the ndb_mgmd log files will be found.

[ root@node1 ~ ] mkdir -p /usr/local/mysql-cluster
[ root@node1 ~ ] mkdir -p /var/lib/mysql-cluster
[ root@node1 ~ ] mv /home/bianchi/Desktop/Downloads/MySQL-* /usr/local/mysql-cluster
[ root@node1 ~ ] cd /usr/local/mysql-cluster
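By the way, if the firewall and SELinux were not disabled during the OS installation, this is a quick way to do it on CentOS 5 (a sketch; setenforce 0 lasts only until reboot, while the change in /etc/selinux/config makes it permanent after the next one):

[ root@node1 mysql-cluster ] service iptables stop
[ root@node1 mysql-cluster ] chkconfig iptables off
[ root@node1 mysql-cluster ] setenforce 0
[ root@node1 mysql-cluster ] sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config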

After downloading the right MySQL Cluster files for the Management Node, we need to create the MySQL Cluster configuration file (using your preferred text editor). This file will be located at /usr/local/mysql-cluster/ and its name will be config.ini.

[ root@node1 mysql-cluster ] pwd
/usr/local/mysql-cluster
[ root@node1 mysql-cluster ] vim config.ini

# Into this file we’ll put all the configuration required for all nodes. Watch for the comments I use in this file (comments come after the # sign).

[ndb_mgmd]
#
# Configurations used to control ndb_mgmd behavior
#
NodeId=1
HostName=192.168.0.101
DataDir=/var/lib/mysql-cluster

[ndbd default]
#
# Configuration that will be inherited by all storage/data nodes
#
DataDir=/var/lib/mysql-cluster
NoOfReplicas=2

[ndbd]
# registering a new storage node
NodeId=3
HostName=192.168.0.102

[ndbd]
# registering a new storage node
NodeId=4
HostName=192.168.0.103

[mysqld]
# registering a new API/SQL node
NodeId=11
HostName=192.168.0.104

[mysqld]
# registering a new API/SQL node
NodeId=12
HostName=192.168.0.105

Save and close the config.ini file. This file contains the configuration to start up a MySQL Cluster with 1 management node, 2 storage nodes and 2 SQL nodes. Now we will proceed with the management node software installation. As we are working with RPM packages, we need only one command to install all the MySQL packages located at /usr/local/mysql-cluster.

[ root@node1 mysql-cluster ] rpm -ivh MySQL-*
Preparing... #################################### [100%]
1:MySQL-Cluster-gpl-management ################## [100%]
2:MySQL-Cluster-gpl-tools ####################### [100%]
[ root@node1 mysql-cluster ]
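If you want, you can confirm that both packages were registered in the RPM database before moving on (the exact package names will vary with the version you downloaded):

[ root@node1 mysql-cluster ] rpm -qa | grep -i mysql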

After the installation, start ndb_mgmd pointing to the previously created config.ini file:

[ root@node1 mysql-cluster ] pwd
/usr/local/mysql-cluster
[ root@node1 mysql-cluster ] ndb_mgmd --config-file=config.ini
MySQL Cluster Management Server mysql-5.1.51 ndb-7.1.9
[ root@node1 mysql-cluster ]
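Besides the console message, ndb_mgmd writes a cluster log under the DataDir we configured; with NodeId=1 the file should be named ndb_1_cluster.log, so a quick look at its last lines also tells us whether the node came up cleanly:

[ root@node1 mysql-cluster ] tail /var/lib/mysql-cluster/ndb_1_cluster.log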

We can use the shell’s $? variable to check whether some error was raised when we started ndb_mgmd:

[ root@node1 mysql-cluster ] echo $?
0
[ root@node1 mysql-cluster ]

The shell’s $? variable can return the exit codes below:

0 -> no error during the last command’s execution, i.e., success
1 -> a general, unknown error occurred
2 -> misuse of a shell built-in was detected
127 -> a nonexistent command was entered, raising a “command not found” error
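A quick illustration of that last code, calling a command name that doesn’t exist on purpose:

[ root@node1 mysql-cluster ] nosuchcommand
-bash: nosuchcommand: command not found
[ root@node1 mysql-cluster ] echo $?
127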

As you can see, our Management Node is up and running at this point and we can issue some commands to list all the cluster’s member nodes. Type just ndb_mgm to use this client to connect to ndb_mgmd and retrieve information, as you see below:

[ root@node1 ~ ] ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 (not connected, accepting connect from 192.168.0.102)
id=4 (not connected, accepting connect from 192.168.0.103)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.101 (mysql-5.1.51 ndb-7.1.9)

[mysqld(API)] 2 node(s)
id=11 (not connected, accepting connect from 192.168.0.104)
id=12 (not connected, accepting connect from 192.168.0.105)

Naturally, as you can see above, we don’t have any nodes connected yet but the management node. With this we have completed the first of the five parts of the job. Next, I’ll describe here how to configure the Storage/Data Nodes and start them connecting to the Management Node.
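By the way, the same listing can be retrieved without entering the interactive client, which is handy inside scripts; ndb_mgm accepts a command through the -e option and exits right after:

[ root@node1 ~ ] ndb_mgm -e "SHOW"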

See you.

