wagnerbianchi.com

The MariaDB Storage-Engine Independent Column Compression

August 2nd, 2021 Bianchi Posted in MySQL Tuning

One of the features DBAs have up their sleeves is the compression of the data living in their databases. For MariaDB Server, this is no different. Among the options available for compressing data, and sometimes saving a good amount of space, is the Storage-Engine Independent Column Compression, which makes it possible to compress data at the column level.

The motivation to analyze this feature in MariaDB Community Server came after reading MDEV-22367, which states that MariaDB should treat tables created with InnoDB ROW_FORMAT=COMPRESSED as read-only by default, as mentioned in the notable changes for MariaDB 10.6 (InnoDB). So, Column Compression appears to be, at first glance, an alternative if you want to make compression a little more granular instead of running your databases with innodb_read_only_compressed set to OFF.

You must understand that compressing the whole table with InnoDB ROW_FORMAT=COMPRESSED is different from compressing individual columns of a table with the Storage-Engine Independent Column Compression.

So, the Storage-Engine Independent Column Compression helps compress columns of the following data types: TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB, TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT, VARCHAR, and VARBINARY. Initially, you only need to worry about adding the COMPRESSED keyword to the columns to get them compressed. I would also like to call attention to the fact that you're pretty much covered if you are using the JSON data type, added to MariaDB Server in version 10.2.7, since JSON is mapped to the LONGTEXT data type mentioned previously.
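Just to illustrate the syntax, here is a minimal sketch of a table using the COMPRESSED attribute at creation time (the table and column names are purely illustrative; we will build the actual test table later in this post):

-- a sketch: compressing a TEXT and a VARBINARY column at creation time
CREATE TABLE example_compressed (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  doc TEXT COMPRESSED,
  payload VARBINARY(1024) COMPRESSED
) ENGINE=InnoDB;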

Before we start creating tables and adding compression, let's see the system and status variables available:

MariaDB [(none)]> show global variables where variable_name in ('column_compression_threshold','column_compression_zlib_level','column_compression_zlib_strategy','column_compression_zlib_wrap');
+----------------------------------+------------------+
| Variable_name                    | Value            |
+----------------------------------+------------------+
| column_compression_threshold     | 100              |
| column_compression_zlib_level    | 6                |
| column_compression_zlib_strategy | DEFAULT_STRATEGY |
| column_compression_zlib_wrap     | OFF              |
+----------------------------------+------------------+
4 rows in set (0.003 sec)

MariaDB [(none)]> show global status where variable_name in ('Column_compressions','Column_decompressions');
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Column_compressions   | 0     |
| Column_decompressions | 0     |
+-----------------------+-------+
2 rows in set (0.001 sec)
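These settings can be tuned dynamically. As a sketch, if you wanted to compress only values larger than 1KB and spend the maximum zlib effort (the threshold of 1024 is just an illustrative choice), you could run:

-- a sketch: raise the compression threshold and the zlib effort
SET GLOBAL column_compression_threshold = 1024;
SET GLOBAL column_compression_zlib_level = 9;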

OK, having said that, let's create a simple table with three columns, one of which we will soon compress by issuing an ALTER TABLE command. All the exercises here were done using MariaDB Community Server 10.6.3 on Ubuntu 20.04.2 LTS (mariadb:latest docker image), but the Column Compression used here has been supported since MariaDB Server 10.3.2.

CREATE TABLE `t1` (
`a` int(11) NOT NULL AUTO_INCREMENT,
`b` varchar(255) DEFAULT NULL,
`c` blob DEFAULT NULL,
PRIMARY KEY (`a`)
) ENGINE=InnoDB AUTO_INCREMENT=84525 DEFAULT CHARSET=utf8mb4;

Let’s add some rows to the table:

root@e7bc0381525d:/# for i in {01..120176}; do mariadb -e "INSERT INTO test.t1 SET a=NULL, b=REPEAT('b',255), c=REPEAT('c',65535);"; done
root@e7bc0381525d:/#

Let’s check the size of the table t1 tablespace:

root@e7bc0381525d:/# mariadb -e 'select count(*) from test.t1'
+----------+
| count(*) |
+----------+
|   120176 |
+----------+
root@e7bc0381525d:/# ls -lh /var/lib/mysql/test
total 12M
-rw-rw---- 1 mysql mysql 67 Aug 2 15:32 db.opt
-rw-rw---- 1 mysql mysql 2.0K Aug 2 17:39 t1.frm
-rw-rw---- 1 mysql mysql 11M Aug 2 18:52 t1.ibd

Let’s add compression to columns b and c:

root@e7bc0381525d:/# mariadb -e 'alter table test.t1 change b b varchar(255) compressed, change c c blob compressed;'

root@e7bc0381525d:/# mariadb -e 'show create table test.t1'
CREATE TABLE `t1` (
 `a` int(11) NOT NULL AUTO_INCREMENT,
 `b` varchar(255) /*!100301 COMPRESSED*/ DEFAULT NULL,
 `c` blob /*!100301 COMPRESSED*/ DEFAULT NULL,
 PRIMARY KEY (`a`)
) ENGINE=InnoDB AUTO_INCREMENT=120620 DEFAULT CHARSET=utf8mb4;
root@e7bc0381525d:/# ls -lh /var/lib/mysql/test
total 4.2M
-rw-rw---- 1 mysql mysql 67 Aug 2 15:32 db.opt
-rw-rw---- 1 mysql mysql 2.0K Aug 2 19:00 t1.frm
-rw-rw---- 1 mysql mysql 4.0M Aug 2 19:00 t1.ibd

Let’s check our status variables:

MariaDB [test]> show global status where variable_name in ('Column_compressions','Column_decompressions');
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Column_compressions   | 22    |
| Column_decompressions | 22    |
+-----------------------+-------+
2 rows in set (0.001 sec)

So, from 11MB down to 4MB, we're talking about a space saving of roughly 63% ((11 - 4) / 11 ≈ 63.6%). The remaining question here is: can this rate vary depending on the size of the tablespace? Maybe you can share your experience by adding a comment; any comments are really welcome.

Attention:

This blog post isn't meant to encourage or discourage anything; it is mainly meant to exercise the column compression, which appears to be a good feature and should be used more so we can offer insights for improving it.

 


MariaDB MaxScale like a Pro: Setting up MaxScale 2.3

August 5th, 2019 Bianchi Posted in MySQL Tuning

I created this series of blog posts after working with MariaDB MaxScale for many customers. All the points mentioned here reflect my views; I'll add links to the online docs so we have an official reference. I intend to share my experiences working with MaxScale; we need more practical documentation so we can improve MaxScale usage and transfer knowledge.

First of all, take a look at the MaxScale 2.3 release notes.

Something you need to know before starting with the hands-on part here: all instances are running Debian 9.

root@prod-mariadb01:~# lsb_release -a
No LSB modules are available.
Distributor ID:	Debian
Description:	Debian GNU/Linux 9.9 (stretch)
Release:	9.9
Codename:	stretch

MariaDB MaxScale in a nutshell…

MariaDB MaxScale is an intelligent database proxy that understands the SQL language and ships with a bunch of bundled modules known as routers or services, monitors, filters, etc. After setting up the MaxScale packages, you have access to all bundled modules; you only need to add a basic configuration file and start the service. Since version 2.1, you don't need to elaborate a complete configuration file (/etc/maxscale.cnf) to start the service. Once you have a configuration file with the global [maxscale] section and at least one service defined, you can start the MaxScale service.

What are we going to be doing for setting up the MariaDB MaxScale?

  1. Add the MariaDB Official Repository;
  2. Create the needed users on the database servers (you will see soon that I’m considering you already a Master/Slave running environment);
  3. Create the .secrets file so we can encrypt the passwords for the users on the maxscale.cnf;
  4. Create a basic configuration file for MaxScale and start the process;
  5. Run dynamic commands so we can create a monitor (MariaDBMon), the servers, a listener, and link created servers with the monitor and the service.

Moving on…

You may end up running something like the configuration below, which gives you the global configuration plus the ReadWriteSplit Router configured as the service:

#: This is the basic configuration file we can get in place to start maxscale.
#: Notice that we need yet to come back soon to this file so we can add the service 
#: encrypted user password (attention to security matters, no clear text passwords, please)

[maxscale]
threads                     = auto
log_augmentation            = 1
ms_timestamp                = 1

[rwsplit-service]
type                        = service
router                      = readwritesplit
user                        = maxusr
password                    = <add your encrypted maxusr password here>
version_string              = 5.5.50-MariaDB #:should be used with all 10.1 servers and older

As we need SOPs, or Standard Operating Procedures, for everything we perform, the documentation I have for setting up MaxScale considers always having these users:

  • A service user: no matter how many services/routers you're running on a MaxScale instance, you need at least one user set for the service. The user defined for the service is maxusr (yes, without the "e", I didn't forget it). Once you define that user, you also need to create it on the backends so the MaxScale router/service can connect to them and forward queries. In this specific scenario, as we're speaking about the ReadWriteSplit Router, writes will be sent to the master and reads will be sent to the slaves. You may want to check how the ReadWriteSplit Router Routing Decisions work so you can better design your applications;
  • A monitor user: monitors are modules that monitor the backends, and depending on what you're running, you will use one monitor or another. If you are running a replication cluster, that is, regular GTID master/slave replication, you want to use MariaDBMon, which gives you automatic operations such as failover/rejoin and the possibility to perform a manual switchover;
  • A replication user: as we're considering a replication cluster, or a simple master/slave scenario, we need a user so MaxScale can configure replication on the database servers when needed. That happens when we configure MaxScale to execute a failover in case the master crashes, when we run a manual switchover, or when a rejoin needs to be executed as the old master comes back to the cluster as a new slave/replica. If you don't create a replication user when configuring MariaDBMon, be aware that the user used for replication will be the one you defined to run the monitor itself; I personally don't like that (don't be lazy, 😉 ).

Let’s assume you have a simple Master/Slave already running, like below:

#: master/slave topology
MariaDB MaxScale Servers
--------------------------------------------------------
1. prod_maxscale01 (10.136.87.62/24 - Mode: Active)

MariaDB Servers Backends Cluster
--------------------------------------------------------
2. prod_mariadb01 (10.136.88.50/24 - master)
3. \__ prod_mariadb02 (10.136.69.104/24 - slave/replica)
4. \__ prod_mariadb03 (10.136.79.28/24  - slave/replica)

As a best practice, always configure the @@global.report_host on all database servers with their names:

prod-mariadb01 [(none)]> show slave hosts;
+-----------+----------------+------+-----------+
| Server_id | Host           | Port | Master_id |
+-----------+----------------+------+-----------+
|         3 | prod_mariadb03 | 3306 |         1 |
|         2 | prod_mariadb02 | 3306 |         1 |
+-----------+----------------+------+-----------+
2 rows in set (0.000 sec)

Assuming the above MariaDB backend cluster already has replication up and running (most of you have an environment like this one), you can now think about how to add a MaxScale server between the applications and your database servers. Most of the time, I'm going to refer to database servers as backends, as per the usual terminology we use after adding a load balancer to a database topology.

Doing a quick recap on where we are: we now need to create the users on the master, so we can see them replicate to the slaves and have the same data all around. Also, it's good to have @@global.gtid_strict_mode set on all the servers so we can keep the binary log files identical on all of them (MaxScale also likes that); a quick sketch of how to do that follows below.
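As a minimal sketch, and assuming you also persist the setting in each server's configuration file so it survives restarts, you can check and enable it dynamically on every backend:

#: check and enable gtid_strict_mode on each backend
SELECT @@global.gtid_strict_mode;
SET GLOBAL gtid_strict_mode = ON;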

Below we are creating the users as mentioned before, considering the backends we’re working with:

#: maxscale service user
CREATE USER 'maxusr'@'10.136.%' IDENTIFIED BY '123';
GRANT SELECT ON mysql.user TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.db TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.tables_priv TO 'maxusr'@'10.136.%';
GRANT SHOW DATABASES ON *.* TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.roles_mapping TO maxusr@'10.136.%';

#: maxscale monitor user
CREATE USER 'maxmon'@'10.136.%' IDENTIFIED BY '321';
GRANT RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'maxmon'@'10.136.%';
GRANT CREATE, SELECT, UPDATE, INSERT, DELETE ON maxscale_schema.* TO 'maxmon'@'10.136.%';

#: replication users  - make sure the below user can connect
#: from all backends to all backends
CREATE USER mariadb@'10.136.%' IDENTIFIED BY '123';
GRANT RELOAD, REPLICATION SLAVE ON *.* TO mariadb@'10.136.%';

One additional detail here: if you're running MariaDB Server 10.3.4 or you upgraded from an older version to a newer one, like MariaDB Server 10.3.5, then, as the user maxmon has the SUPER privilege, the DELETE HISTORY privilege will also be added to its list of grants, because that user should also be able to delete data from System-Versioned Tables.

After creating the above users, we need to go to prod_maxscale01 (10.136.87.62), as we need to set up the MariaDB official repository and install the MaxScale packages:

#: setting up the repository
root@prod-maxscale01:~# curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
[warning] Found existing file at /etc/apt/sources.list.d/mariadb.list. Moving to /etc/apt/sources.list.d/mariadb.list.old_1.
[info] Repository file successfully written to /etc/apt/sources.list.d/mariadb.list
[info] Adding trusted package signing keys...
Executing: /tmp/apt-key-gpghome.sDa0MNg3Md/gpg.1.sh --keyserver hkp://keys.gnupg.net:80 --recv-keys 0x8167EE24 0xE3C94F49 0xcbcb082a1bb943db 0xf1656f24c74cd1d8 0x135659e928c12247
gpg: key 135659E928C12247: "MariaDB Maxscale <maxscale@googlegroups.com>" not changed
gpg: key F1656F24C74CD1D8: 4 signatures not checked due to missing keys
gpg: key F1656F24C74CD1D8: "MariaDB Signing Key <signing-key@mariadb.org>" not changed
gpg: key CBCB082A1BB943DB: 32 signatures not checked due to missing keys
gpg: key CBCB082A1BB943DB: "MariaDB Package Signing Key <package-signing-key@mariadb.org>" not changed
gpg: key CE1A3DD5E3C94F49: 3 signatures not checked due to missing keys
gpg: key CE1A3DD5E3C94F49: "MariaDB Enterprise Signing Key <signing-key@mariadb.com>" not changed
gpg: key 70E4618A8167EE24: "MariaDBManager" not changed
gpg: Total number processed: 5
gpg:              unchanged: 5
Hit:1 http://security.debian.org stretch/updates InRelease
Ign:2 http://mirrors.digitalocean.com/debian stretch InRelease
Hit:3 https://repos.insights.digitalocean.com/apt/do-agent main InRelease
Get:4 http://mirrors.digitalocean.com/debian stretch-updates InRelease [91.0 kB]
Hit:5 http://downloads.mariadb.com/MariaDB/mariadb-10.4/repo/debian stretch InRelease
Hit:6 http://mirrors.digitalocean.com/debian stretch Release
Ign:7 http://downloads.mariadb.com/MaxScale/2.3/debian stretch InRelease
Hit:8 http://downloads.mariadb.com/Tools/debian stretch InRelease
Hit:10 http://downloads.mariadb.com/MaxScale/2.3/debian stretch Release
Hit:9 https://packagecloud.io/akopytov/sysbench/debian stretch InRelease
Fetched 91.0 kB in 0s (106 kB/s)
Reading package lists... Done
[info] Successfully added trusted package signing keys.

#: setting up packages
root@prod-maxscale01:~# apt install maxscale maxscale-experimental mariadb-client -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  maxscale maxscale-experimental
0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
Need to get 167 kB/31.1 MB of archives.
After this operation, 143 MB of additional disk space will be used.
Get:1 http://downloads.mariadb.com/MaxScale/2.3/debian stretch/main amd64 maxscale-experimental amd64 2.3.11 [167 kB]
Fetched 167 kB in 0s (337 kB/s)
Selecting previously unselected package maxscale.
(Reading database ... 35953 files and directories currently installed.)
Preparing to unpack .../maxscale_2.3.11_amd64.deb ...
Unpacking maxscale (2.3.11) ...
Selecting previously unselected package maxscale-experimental.
Preparing to unpack .../maxscale-experimental_2.3.11_amd64.deb ...
Unpacking maxscale-experimental (2.3.11) ...
Setting up maxscale (2.3.11) ...
Setting up maxscale-experimental (2.3.11) ...

Why am I also installing the mariadb-client package? We need to test access from the MaxScale host to the backends so we can make sure MaxScale, configured with the users we created, will be able to access them. Hitting permission or access-denied errors when we execute queries is really bad, as we would need to recap everything we did during the setup, and it can take some time to review it all. We definitely don't want that. Test access and move on.

Now, let's create the .secrets file and get encrypted versions of the users' passwords:

#: create the .secrets file
root@prod_maxscale01:~# maxkeys
Generating .secrets file in /var/lib/maxscale.

#: generate the encrypted password for maxusr - this is the service user
#: you are going to need the below-encrypted password for the next question
root@prod-maxscale01:~# maxpasswd 123 #: maxusr
A0FE98035CFA5EB978337B739E949878

#: generate the encrypted password for maxmon - this is the monitor user
#: you are going to need the below-encrypted password on next labs
root@prod-maxscale01:~# maxpasswd 321 #: maxmon
AFB909850E7181E9906159CE45176FAD

#: generate the encrypted password for the mariadb replication user
root@prod-maxscale01:~# maxpasswd 123 #: mariadb
A0FE98035CFA5EB978337B739E949878

#: adjust permissions for the .secrets file
root@prod-maxscale01:~# chown maxscale:maxscale /var/lib/maxscale/.secrets

With the encrypted passwords, we can create a basic configuration file. The below is your /etc/maxscale.cnf:

[maxscale]
threads                     = auto
log_augmentation            = 1
ms_timestamp                = 1
admin_host                  = 0.0.0.0
admin_port                  = 8989

[rwsplit-service]
type                        = service
router                      = readwritesplit
user                        = maxusr
password                    = A0FE98035CFA5EB978337B739E949878

Make sure maxscale.service is enabled so it starts at OS boot, then start it and check the log file afterward:

#: starting up the maxscale.service
root@prod-maxscale01:~# systemctl --now enable maxscale.service

root@prod-maxscale01:~# tail -n30 /var/log/maxscale/maxscale.log
MariaDB MaxScale  /var/log/maxscale/maxscale.log  Mon Aug  5 12:25:54 2019
----------------------------------------------------------------------------
2019-08-05 12:25:54   notice : (mxb_log_set_syslog_enabled): syslog logging is enabled.
2019-08-05 12:25:54   notice : (mxb_log_set_maxlog_enabled): maxlog logging is enabled.
2019-08-05 12:25:54.078   notice : (mxb_log_set_highprecision_enabled): highprecision logging is enabled.
2019-08-05 12:25:54.078   notice : (config_load_global): Using up to 976.56KiB of memory for query classifier cache
2019-08-05 12:25:54.079   notice : (change_cwd): Working directory: /var/log/maxscale
2019-08-05 12:25:54.079   notice : (init_sqlite3): The collection of SQLite memory allocation statistics turned off.
2019-08-05 12:25:54.079   notice : (init_sqlite3): Threading mode of SQLite set to Multi-thread.
2019-08-05 12:25:54.080   notice : (main): MariaDB MaxScale 2.3.11 started (Commit: 36355922281a6820de63b76fb76c9203861e3988)
2019-08-05 12:25:54.080   notice : (main): MaxScale is running in process 13166
2019-08-05 12:25:54.080   notice : (main): Configuration file: /etc/maxscale.cnf
2019-08-05 12:25:54.080   notice : (main): Log directory: /var/log/maxscale
2019-08-05 12:25:54.081   notice : (main): Data directory: /var/lib/maxscale
2019-08-05 12:25:54.081   notice : (main): Module directory: /usr/lib/x86_64-linux-gnu/maxscale
2019-08-05 12:25:54.081   notice : (main): Service cache: /var/cache/maxscale
2019-08-05 12:25:54.082   notice : (load_module): Loaded module qc_sqlite: V1.0.0 from /usr/lib/x86_64-linux-gnu/maxscale/libqc_sqlite.so
2019-08-05 12:25:54.082   notice : (qc_setup): Query classification results are cached and reused. Memory used per thread: 976.56KiB
2019-08-05 12:25:54.083   notice : (init): The systemd watchdog is Enabled. Internal timeout = 30s
2019-08-05 12:25:54.083   notice : (config_load_single_file): Loading /etc/maxscale.cnf.
2019-08-05 12:25:54.084   notice : (is_directory): /etc/maxscale.cnf.d does not exist, not reading.
2019-08-05 12:25:54.084   notice : (mxs_get_module_object): Initializing statement-based read/write split router module.
2019-08-05 12:25:54.085   notice : (load_module): Loaded module readwritesplit: V1.1.0 from /usr/lib/x86_64-linux-gnu/maxscale/libreadwritesplit.so
2019-08-05 12:25:54.085   notice : (qc_sqlite_process_init): Statements that cannot be parsed completely are logged.
2019-08-05 12:25:54.086   notice : (service_launch_all): Starting a total of 1 services...
2019-08-05 12:25:54.086   warning: (serviceStartAllPorts): Service 'rwsplit-service' has no listeners defined.
2019-08-05 12:25:54.086   notice : (service_launch_all): Service 'rwsplit-service' started (1/1)
2019-08-05 12:25:54.086   notice : (main): Started REST API on [0.0.0.0]:8989
2019-08-05 12:25:54.086   notice : (main): MaxScale started with 1 worker threads, each with a stack size of 8388608 bytes.
2019-08-05 12:25:54.090   notice : (hkthread): Housekeeper thread started.

Now you have MaxScale up and running! It's time to test the maxusr and maxmon connectivity with the backends:

#: service user access test
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.88.50 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.69.104 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.79.28 ping
mysqld is alive

#: monitor user access test
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.88.50 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.69.104 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.79.28 ping
mysqld is alive

A final check to make sure everything is really set is to test if you can access all backends from all backends. I don't want to go over all nine connection tests here (each of the three backends against each of the three backends), but it's good that you go over them and make sure the replication user can access everything from everywhere, as replication will be set up by MaxScale and you don't want access denied reported by the IO_THREAD for a new slave configured by MaxScale. A quick loop like the one below helps.
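As a minimal sketch, assuming the replication user mariadb/123 created earlier, something like this can be run on each backend (and on the MaxScale host) to cover all the combinations:

#: run this loop on each backend to test access to all backends
for h in 10.136.88.50 10.136.69.104 10.136.79.28; do
  mysqladmin -u mariadb -p123 -h $h ping
done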

At this point, we have MaxScale running with a basic configuration file and also basic settings for the ReadWriteSplit Router, which is the only service we currently have running on MaxScale. You can notice that we have configurations in the global [maxscale] section that make it possible to remotely access MaxScale using MaxCtrl. Here I'm not considering MaxAdmin, as it's deprecated in MaxScale 2.3 and will be removed in MaxScale 2.4; the latter is currently in beta.

Currently, you can use MaxCtrl to retrieve basic information like below:

#: maxscale global configurations
root@prod-maxscale01:~# maxctrl show maxscale
┌──────────────┬──────────────────────────────────────────────────────────────────────┐
│ Version      │ 2.3.11                                                               │
├──────────────┼──────────────────────────────────────────────────────────────────────┤
│ Commit       │ 36355922281a6820de63b76fb76c9203861e3988                             │
├──────────────┼──────────────────────────────────────────────────────────────────────┤
│ Started At   │ Mon, 05 Aug 2019 12:25:54 GMT                                        │
├──────────────┼──────────────────────────────────────────────────────────────────────┤
│ Activated At │ Mon, 05 Aug 2019 12:25:54 GMT                                        │
├──────────────┼──────────────────────────────────────────────────────────────────────┤
│ Uptime       │ 17863                                                                │
├──────────────┼──────────────────────────────────────────────────────────────────────┤
│ Parameters   │ {                                                                    │
│              │     "libdir": "/usr/lib/x86_64-linux-gnu/maxscale",                  │
│              │     "datadir": "/var/lib/maxscale",                                  │
│              │     "process_datadir": "/var/lib/maxscale/data13166",                │
[...snip...]
│              │     "admin_auth": true,                                              │
│              │     "admin_enabled": true,                                           │
│              │     "admin_log_auth_failures": true,                                 │
│              │     "admin_host": "0.0.0.0",                                         │
│              │     "admin_port": 8989,                                              │
│              │     "admin_ssl_key": "",                                             │
│              │     "admin_ssl_cert": "",                                            │
│              │     "admin_ssl_ca_cert": "",                                         │
│              │     "passive": false,                                                │
[...snip...]
│              │     "load_persisted_configs": true                                   │
│              │ }                                                                    │
└──────────────┴──────────────────────────────────────────────────────────────────────┘

There are lots of commands to retrieve information from MaxScale using MaxCtrl, which communicates with the REST API, now listening on the MaxScale host on any interface, as per the `admin_host` variable, and on port 8989, as defined by `admin_port`. As we have tested the communication with MaxScale, we can now use MaxCtrl to create the needed objects, like a monitor and a listener, add the servers, and link them to the monitor and the service. Since MaxScale 2.1, we have a way to do this with Dynamic Commands; the result of these commands is to create objects and persist them in files created at the --persistdir. Additionally, a journal file will also be kept at the MaxScale --datadir so it can keep track of the current status of the backends in case maxscale.service is restarted.
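As a side note, you can also query the REST API directly; as a sketch, assuming the default admin credentials (admin/mariadb) have not been changed, something like this returns the same global information MaxCtrl shows:

#: querying the MaxScale REST API directly
curl -s -u admin:mariadb http://10.136.87.62:8989/v1/maxscale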

Let's create the cluster; follow the commands below so we can do that:

#: ReadWriteSplit setup Using Dynamic Commands
#: Created by Wagner Bianchi <bianchi@mariadb.com>
#: task: creating the monitor
maxctrl create monitor replication-monitor mariadbmon --monitor-user=maxmon --monitor-password=AFB909850E7181E9906159CE45176FAD replication_user=mariadb replication_password=A0FE98035CFA5EB978337B739E949878

#: task: configuring the monitor for the replication cluster
maxctrl alter monitor replication-monitor monitor_interval          1000 
maxctrl alter monitor replication-monitor failcount                 3 
maxctrl alter monitor replication-monitor auto_failover             true 
maxctrl alter monitor replication-monitor auto_rejoin               true
maxctrl alter monitor replication-monitor enforce_read_only_slaves  true

#: task: create a listener
maxctrl create listener rwsplit-service replication-rwsplit-listener 3306

#: task: create servers
maxctrl create server prod_mariadb01 10.136.88.50  3306
maxctrl create server prod_mariadb02 10.136.69.104 3306
maxctrl create server prod_mariadb03 10.136.79.28  3306

#: task: link servers with the service
maxctrl link service rwsplit-service prod_mariadb01
maxctrl link service rwsplit-service prod_mariadb02
maxctrl link service rwsplit-service prod_mariadb03

#: task: link servers with the monitor
maxctrl link monitor replication-monitor prod_mariadb01
maxctrl link monitor replication-monitor prod_mariadb02
maxctrl link monitor replication-monitor prod_mariadb03
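
#: task: optionally verify the created objects before moving on
#: (a quick sanity check; the exact output layout may vary by version)
maxctrl list monitors
maxctrl list services
maxctrl show monitor replication-monitor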

If you executed the above commands while tailing the maxscale.log, you would have seen many interesting things. At this point, it's ready to go:

root@prod-maxscale01:~# maxctrl list servers
┌────────────────┬───────────────┬──────┬─────────────┬─────────────────┬──────────────┐
│ Server         │ Address       │ Port │ Connections │ State           │ GTID         │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb03 │ 10.136.79.28  │ 3306 │ 0           │ Slave, Running  │ 0-1-3        │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb02 │ 10.136.69.104 │ 3306 │ 0           │ Slave, Running  │ 0-1-3        │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb01 │ 10.136.88.50  │ 3306 │ 0           │ Master, Running │ 0-1-3        │
└────────────────┴───────────────┴──────┴─────────────┴─────────────────┴──────────────┘

To conclude this blog post, we can do a final test, running sysbench against the MaxScale server:

root@prod-maxscale01:~# mysql -u bianchi -p123 -h 10.136.87.62 -e "create database maxscaledb" -vvv
--------------
create database maxscaledb
--------------

Query OK, 1 row affected (0.002 sec)

Bye

root@prod-maxscale01:~# sysbench --test=/usr/share/sysbench/oltp_read_write.lua --table_size=10000 --mysql-db=maxscaledb --tables=20 --mysql-user=bianchi --mysql-password=123 --mysql-port=3306 --mysql-host=10.136.87.62 --db-driver=mysql --threads=32 --events=0 --time=60 --rand-type=uniform --report-interval=1 prepare
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)

Initializing worker threads...

Creating table 'sbtest15'...
Creating table 'sbtest17'...
Creating table 'sbtest16'...
Creating table 'sbtest14'...
[...snip...]
Creating a secondary index on 'sbtest19'...
Creating a secondary index on 'sbtest20'...
Creating a secondary index on 'sbtest16'...
Creating a secondary index on 'sbtest11'...
Creating a secondary index on 'sbtest14'...

root@prod-maxscale01:~# sysbench --test=/usr/share/sysbench/oltp_read_write.lua --table_size=10000 --mysql-db=maxscaledb --tables=20 --mysql-user=bianchi --mysql-password=123 --mysql-port=3306 --mysql-host=10.136.87.62 --db-driver=mysql --threads=32 --events=0 --time=60 --rand-type=uniform --report-interval=1 run &
[1] 15656

root@prod-maxscale01:~# maxctrl list servers
┌────────────────┬───────────────┬──────┬─────────────┬─────────────────┬──────────┐
│ Server         │ Address       │ Port │ Connections │ State           │ GTID     │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb03 │ 10.136.79.28  │ 3306 │ 32          │ Slave, Running  │ 0-1-8144 │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb01 │ 10.136.88.50  │ 3306 │ 32          │ Master, Running │ 0-1-8144 │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb02 │ 10.136.69.104 │ 3306 │ 32          │ Slave, Running  │ 0-1-8144 │
└────────────────┴───────────────┴──────┴─────────────┴─────────────────┴──────────┘

In the next blog post, I will carry on with this same environment as defined here, and I will test failover, switchover, and rejoin.


MySQL InnoDB Cluster, now with remote nodes!

September 25th, 2016 Bianchi Posted in MySQL Tuning

In this post, I'm going to extend the tests I made with MySQL InnoDB Cluster in the previous post, creating a group of instances on separate servers. That is, I'm going to test how to create a new cluster with three different machines, considering that if you create a cluster using one giant server, it may be a big single point of failure: if that giant server crashes, all the cluster's members crash together.

Preventing that situation is part of any project using a database whose principle is to scale out in order to serve more and more data requests. The main strategies to distribute writes and reads are a subject for another blog post, as they go beyond the scope of this one.

I'm going to concentrate here on creating the cluster with 3 machines. I'm using Vagrant to create them, and the following is the script that creates the virtual machines:

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
VAGRANTFILE_API_VERSION = "2"
 
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "box01" do |box01|
	box01.vm.hostname="box01"
	box01.vm.box = "centos7.0_x86_64"
	box01.vm.network "private_network", ip: "192.168.50.11", virtualbox__intnet: "mysql_innodb_cluster"
  end
 
  config.vm.define "box02" do |box02|
	box02.vm.hostname="box02"
        box02.vm.box = "centos7.0_x86_64"
        box02.vm.network "private_network", ip: "192.168.50.12", virtualbox__intnet: "mysql_innodb_cluster"
  end
 
  config.vm.define "box03" do |box03|
        box03.vm.hostname="box03"
        box03.vm.box = "centos7.0_x86_64"
        box03.vm.network "private_network", ip: "192.168.50.13", virtualbox__intnet: "mysql_innodb_cluster"
  end
end

I'm assuming you have added a CentOS 7 image to your local Vagrant boxes library and that you're using the VirtualBox hypervisor driver to create virtual machines. If your setup differs from this, the above script may not work as expected. Below, the machines are running:

wagnerbianchi01-3:mysql_innodb_cluster01 root# vagrant status
Current machine states:
box01                     running (virtualbox)
box02                     running (virtualbox)
box03                     running (virtualbox)

With that, we can start configuring the servers in order to create the cluster. Basically, the steps are as follows:

1. Setup all packages on all three servers

On the first server, install all the packages, including the router one, as we are going to bootstrap it on that node. You don't need to install the MySQL Router package on the other two nodes, as it's not needed there. MySQL Shell should be installed on all three nodes. Below I show the packages I installed on each of the nodes:

#: box01
  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-router.x86_64 0:2.1.0-0.1.labs.el7
  mysql-router-debuginfo.x86_64 0:2.1.0-0.1.labs.el7
  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7
  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7
 
#: box02
  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7
  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7
 
#: box03
  mysql-community-client.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-common.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-devel.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-libs-compat.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-community-server.x86_64 0:5.7.15-1.labs_gr090.el7
  mysql-shell.x86_64 0:1.0.5-0.1.labs.el7
  mysql-shell-debuginfo.x86_64 0:1.0.5-0.1.labs.el7

To grab all these packages for your tests, click here (http://downloads.mysql.com/snapshots/pb/mysql-innodb-cluster-5.7.15-preview/mysql-innodb-cluster-labs201609-el7-x86_64.rpm.tar.gz).

2. Add the needed settings to the MySQL configuration file, aka my.cnf:

[root@box01 mysql]# cat /etc/my.cnf
[mysqld]
user=mysql
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
 
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
 
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
 
#: innodb cluster configs
server_id=1
binlog_checksum=none
enforce_gtid_consistency=on
gtid_mode=on
log_bin
log_slave_updates
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64

Make sure you restart mysqld if you add new settings after having initialized it, so the above variables take effect.

3. Initialize mysqld (using --initialize-insecure) and restart the service:

[root@box01 ~]# mysqld --initialize-insecure
[root@box01 mysql]# ls -lh
total 109M
-rw-r----- 1 mysql mysql   56 24. Sep 16:23 auto.cnf
-rw-r----- 1 mysql mysql  169 24. Sep 16:23 box01-bin.000001
-rw-r----- 1 mysql mysql   19 24. Sep 16:23 box01-bin.index
-rw-r----- 1 mysql mysql  413 24. Sep 16:23 ib_buffer_pool
-rw-r----- 1 mysql mysql  12M 24. Sep 16:23 ibdata1
-rw-r----- 1 mysql mysql  48M 24. Sep 16:23 ib_logfile0
-rw-r----- 1 mysql mysql  48M 24. Sep 16:23 ib_logfile1
drwxr-x--- 2 mysql mysql 4,0K 24. Sep 16:23 mysql
drwxr-x--- 2 mysql mysql 8,0K 24. Sep 16:23 performance_schema
drwxr-x--- 2 mysql mysql 8,0K 24. Sep 16:23 sys
[root@box01 mysql]# systemctl restart mysqld.service
[root@box01 mysql]# systemctl status mysqld.service
mysqld.service - MySQL Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: active (running) since Sa 2016-09-24 16:25:13 CEST; 6s ago
  Process: 17112 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
  Process: 17095 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
 Main PID: 17116 (mysqld)
   CGroup: /system.slice/mysqld.service
           └─17116 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid
 
Sep 24 16:25:12 box01 systemd[1]: Starting MySQL Server...
Sep 24 16:25:13 box01 systemd[1]: Started MySQL Server.

4. Configure the password for root@'%', giving the GRANT OPTION to this user:

In this step, you need to give the right privileges to root@'%' and configure a password for this user, which will soon be used to complete the setup. In the next step, which verifies and validates the instance, you will be prompted for this root@'%' password, so follow the steps below on all three nodes:

#: create and configure the root@‘%'
mysql> grant all on *.* to root@'%' identified by 'bianchi' with grant option;
Query OK, 0 rows affected, 1 warning (0,00 sec) -- don’t worry about this warning
 
#: configure the password for root@localhost
mysql> set password='bianchi';
Query OK, 0 rows affected (0,00 sec)
 
#: in any case, flush grants tables
mysql> flush privileges;
Query OK, 0 rows affected (0,00 sec)

5. Validate the instances; this is done by accessing MySQL Shell on all three nodes and running the command below:

mysql-js> dba.validateInstance('root@localhost:3306')
Please provide a password for 'root@localhost:3306':
Validating instance...
 
Running check command.
Checking Group Replication prerequisites.
* Comparing options compatibility with Group Replication... PASS
Server configuration is compliant with the requirements.
* Checking server version... PASS
Server is 5.7.15
 
* Checking that server_id is unique... PASS
The server_id is valid.
 
* Checking compliance of existing tables... PASS
 
The instance: localhost:3306 is valid for Cluster usage

At this point, as we're going to start accessing instances all around, make sure you configure iptables appropriately, or even just flush all the configured chains, in order to avoid the below message when accessing remote nodes:

[root@box01 mysql]# mysql -u root -p -h box02
Enter password:
ERROR 2003 (HY000): Can't connect to MySQL server on 'box02' (113)
 
[root@box02 ~]# iptables -F
[root@box02 ~]# systemctl stop firewalld
 
[root@box01 mysql]# mysql -u root -p -h box02
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.15-labs-gr090-log MySQL Community Server (GPL)
 
Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> \q
Bye

6. At this point, we need to create a cluster:

Let’s use box01 as the server in which we will create the cluster and bootstrap it, creating all the cluster’s metadata.

#: create the cluster on box01
[root@box01 mysql]# mysqlsh
Welcome to MySQL Shell 1.0.5-labs Development Preview
 
Copyright (c) 2016, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type '\help', '\h' or '\?' for help, type '\quit' or '\q' to exit.
 
Currently in JavaScript mode. Use \sql to switch to SQL mode and execute queries.
mysql-js> \c root@localhost:3306
Creating a Session to 'root@localhost:3306'
Enter password:
Classic Session successfully established. No default schema selected.
 
mysql-js> cluster = dba.createCluster('wbCluster001')
A new InnoDB cluster will be created on instance 'root@localhost:3306'.
 
When setting up a new InnoDB cluster it is required to define an administrative
MASTER key for the cluster. This MASTER key needs to be re-entered when making
changes to the cluster later on, e.g.adding new MySQL instances or configuring
MySQL Routers. Losing this MASTER key will require the configuration of all
InnoDB cluster entities to be changed.
 
Please specify an administrative MASTER key for the cluster 'wbCluster001':
Creating InnoDB cluster 'wbCluster001' on 'root@localhost:3306'...
Adding Seed Instance...
 
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
 
mysql-js>

Now we can use the value we stored in the cluster variable to display the status of the just-created cluster:

mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "localhost:3306": {
                "address": "localhost:3306",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {}
            }
        }
    }
}

The cluster status at this point shows that it's not fault tolerant because no other node is part of the cluster wbCluster001 yet. Another thing I verified here, which was also present in the scenario of the previous post, is that the metadata is created in tables in a database schema called mysql_innodb_cluster_metadata, added to the instance used to create the cluster, which will be the instance used to manage the cluster.

#: box01, the instance used as the cluster’s seed
mysql> use mysql_innodb_cluster_metadata
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> show tables;
+-----------------------------------------+
| Tables_in_mysql_innodb_cluster_metadata |
+-----------------------------------------+
| clusters                                |
| hosts                                   |
| instances                               |
| replicasets                             |
| schema_version                          |
+-----------------------------------------+
5 rows in set (0,00 sec)
 
mysql> select cluster_id,cluster_name from mysql_innodb_cluster_metadata.clusters\G
*************************** 1. row ***************************
  cluster_id: 1
cluster_name: wbCluster001
1 row in set (0,00 sec)

7. Adding instances to the cluster:

By now, what we need to do is start adding the instances we set up to our existing cluster. To do that, in case you don't have the cluster handle in the cluster variable anymore, you can use mysqlsh, connect to the instance running on box01:3306, and use dba.getCluster('wbCluster001') again. After doing that, you can move forward and execute the addInstance() methods below to add instances box02 and box03 to the existing cluster.

mysql-js> \c root@192.168.50.11:3306
Creating a Session to 'root@192.168.50.11:3306'
Enter password:
Classic Session successfully established. No default schema selected.
mysql-js> cluster = dba.getCluster('wbCluster001')
When the InnoDB cluster was setup, a MASTER key was defined in order to enable
performing administrative tasks on the cluster.
 
Please specify the administrative MASTER key for the cluster 'wbCluster001':
<Cluster:wbCluster001>
 
#: adding box02
mysql-js> cluster.addInstance('root@192.168.50.12:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@192.168.50.12:3306':
Adding instance to the cluster ...
 
The instance 'root@192.168.50.12:3306' was successfully added to the cluster.
 
#: adding box03
mysql-js> cluster.addInstance('root@192.168.50.13:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@192.168.50.13:3306':
Adding instance to the cluster ...
 
The instance 'root@192.168.50.13:3306' was successfully added to the cluster.

At this point, configuring exactly the way you’re reading above, I saw the error logs on both joiner nodes, box02 and box03, the following messages:

2016-09-25T00:34:11.285509Z 61 [ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'mysql_innodb_cluster_rpl_user@box01:3306' - retry-time: 60  retries: 1, Error_code: 2005
2016-09-25T00:34:11.285535Z 61 [Note] Slave I/O thread for channel 'group_replication_recovery' killed while connecting to master
2016-09-25T00:34:11.285539Z 61 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2016-09-25T00:34:11.285963Z 48 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Check group replication recovery's connection credentials.'
2016-09-25T00:34:11.286204Z 48 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 8/10'

While more and more connection errors between the joiner and the donor were being added to the error log, I added entries for all boxes to /etc/hosts, and then the issue was fixed. So, it is very important to add the hostname entries shown right below to each machine's hosts file so names resolve correctly. If you don't do that, when you check cluster.status(), it will report that the joiner node is in RECOVERING state, as box03, or 192.168.50.13:3306, is in the output that follows.
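As a reference, this is the kind of content I mean, a sketch based on the IP addresses from the Vagrantfile above, to be added to /etc/hosts on all three boxes:

#: /etc/hosts entries on box01, box02, and box03
192.168.50.11   box01
192.168.50.12   box02
192.168.50.13   box03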

mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "192.168.50.11:3306": {
                "address": "192.168.50.11:3306",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "192.168.50.12:3306": {
                        "address": "192.168.50.12:3306",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "192.168.50.13:3306": {
                        "address": "192.168.50.13:3306",
                        "status": "RECOVERING",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}

As many attempts were made while I was fixing the hosts file problem, I had to run cluster.rejoinInstance() for box03, as you can see below:

mysql-js> cluster.rejoinInstance('root@192.168.50.13:3306')
Please provide the password for 'root@192.168.50.13:3306':
The instance will try rejoining the InnoDB cluster. Depending on the original
problem that made the instance unavailable the rejoin, operation might not be
successful and further manual steps will be needed to fix the underlying
problem.
 
Please monitor the output of the rejoin operation and take necessary action if
the instance cannot rejoin.
Enter the password for server (root@192.168.50.13:3306):
Enter the password for replication_user (mysql_innodb_cluster_rpl_user):
Enter the password for peer_server (root@192.168.50.12:3306):
 
Running join command on '192.168.50.13@3306'.
 
Running health command on '192.168.50.13@3306'.
Group Replication members:
  - Host: box03
    Port: 3306
    State: ONLINE
  - Host: box02
    Port: 3306
    State: ONLINE
  - Host: box01
    Port: 3306
    State: ONLINE

So, at this point, the cluster is OK, with all three nodes up and running:

#: describe cluster
mysql-js> cluster.describe()
{
    "clusterName": "wbCluster001",
    "adminType": "local",
    "defaultReplicaSet": {
        "name": "default",
        "instances": [
            {
                "name": "192.168.50.11:3306",
                "host": "192.168.50.11:3306",
                "role": "HA"
            },
            {
                "name": "192.168.50.12:3306",
                "host": "192.168.50.12:3306",
                "role": "HA"
            },
            {
                "name": "192.168.50.13:3306",
                "host": "192.168.50.13:3306",
                "role": "HA"
            }
        ]
    }
}
#: cluster status
 
mysql-js> cluster.status()
{
    "clusterName": "wbCluster001",
    "defaultReplicaSet": {
        "status": "Cluster is tolerant to 2 failures.",
        "topology": {
            "192.168.50.11:3306": {
                "address": "192.168.50.11:3306",
                "status": "ONLINE",
                "role": "HA",
                "mode": "R/W",
                "leaves": {
                    "192.168.50.12:3306": {
                        "address": "192.168.50.12:3306",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    },
                    "192.168.50.13:3306": {
                        "address": "192.168.50.13:3306",
                        "status": "ONLINE",
                        "role": "HA",
                        "mode": "R/O",
                        "leaves": {}
                    }
                }
            }
        }
    }
}

After solving the issues mentioned above, I saw the following events added to the error logs on box02 and box03:

#: box02
2016-09-26T14:07:02.432632Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
 
#: box03
2016-09-26T14:14:52.432632Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

In the end, you can check that MySQL Group Replication is the underlying feature that empowers MySQL InnoDB Cluster. On box01, or 192.168.50.11:3306:

mysql-sql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | b0b1603f-83ef-11e6-85a6-080027de0e0e | box01       |        3306 | ONLINE       |
| group_replication_applier | bb29750c-83ef-11e6-8b4f-080027de0e0e | box02       |        3306 | ONLINE       |
| group_replication_applier | bbu3761b-83ef-11e6-894c-080027de0t0e | box03       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

Next time, I'm going to bootstrap the router to show some tests related to routing connections away from failed nodes. My final considerations about this new way to provide HA with InnoDB: there is not enough documentation yet regarding the existing methods to manipulate instances within the cluster, in case you need to take one off, restart it, or even find out why it is OFFLINE; so far, I haven't found a better way to manipulate nodes other than adding them to the cluster. This is not GA, the feature was just released, but to me it's very promising and will make it much easier to create clusters, and I expect to see more and more about it. Once again, great job, Oracle MySQL Team, let's move on!!

You can find more resources on below links:

– http://mysqlserverteam.com/introducing-mysql-innodb-cluster-a-hands-on-tutorial/
– http://mysqlserverteam.com/introducing-mysql-innodb-cluster-mysql-ha-out-of-box-easy-to-use-high-availability/

Arrivederci!!


MySQL 8.0 DMR, new features, part 1

September 12th, 2016 Bianchi Posted in MySQL Tuning

I would like to start by telling the reader that this is going to be the first of several blog posts I'm planning in order to explore subjects around MySQL 8.0, as I have been testing its features. As an Oracle ACE Director, part of the Oracle ACEs program, I received from my friend Fred Deschamps, currently the Oracle Community Manager for MySQL, early access to the binaries as well as a briefing on the new features, changes, and deprecations. I got pretty excited about many of the coming features and the changes to features existing in 5.6/5.7, and I'm going to write more about some of the hot topics Oracle published for MySQL 8.0. Just for the record, in case you get curious, the operating system I'm using for this and other blog posts related to MySQL 8.0 is CentOS 7 with kernel 3.10.0-229.el7.x86_64.

Current status of mysqld.service:

[root@mysql80drm1 vagrant]# systemctl status mysqld.service
● mysqld.service - MySQL Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2016-08-28 01:51:51 CEST; 2s ago
Process: 16304 ExecStart=/usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid $MYSQLD_OPTS (code=exited, status=0/SUCCESS)
Process: 16229 ExecStartPre=/usr/bin/mysqld_pre_systemd (code=exited, status=0/SUCCESS)
Main PID: 16307 (mysqld)
CGroup: /system.slice/mysqld.service
└─16307 /usr/sbin/mysqld --daemonize --pid-file=/var/run/mysqld/mysqld.pid

Aug 28 01:51:46 mysql80drm1 systemd[1]: Starting MySQL Server...
Aug 28 01:51:51 mysql80drm1 systemd[1]: Started MySQL Server.

As expected behavior since MySQL 5.7.6, the initial temporary password for the root account is written to the error log and must be changed on first access, as that temporary password is set as expired. Because the password validation plugin is enabled by default, you need to choose a strong password in order to change the root account's one; a too-simple password is rejected, as in the sketch below. Mine is P@ssw0rd, to keep things simple at this point.
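Just as an illustration of the validation plugin in action (the password here is purely an example), a weak password should be refused with an error like this:

mysql> ALTER USER root@localhost IDENTIFIED BY 'abc';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements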

[root@mysql80drm1 vagrant]# cat /var/log/mysqld.log | egrep "A temporary password is generated for root@localhost"
2016-08-27T23:51:47.582177Z 4 [Note] A temporary password is generated for root@localhost: aLpaL<?3p>T=

[root@mysql80drm1 vagrant]# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 8.0.0-dmr

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
 affiliates. Other names may be trademarks of their respective
 owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> ALTER USER root@localhost IDENTIFIED BY 'P@ssw0rd';
 Query OK, 0 rows affected (0.00 sec)
mysql> \q
 Bye

MySQL 8.0 DMR 1 – Transactional Data Dictionary

When I started reading the document shared by Oracle with the ACEs regarding the coming changes for MySQL 8.0, I needed to re-read it to really believe that the old I_S is gone. Additionally, all those files containing metadata persisted on disk no longer exist, that is, “the FRM, TRG, PAR files are gone”.

mysql> create database wb;
 Query OK, 1 row affected (0.00 sec)
mysql> use wb;
 Database changed
mysql> create table t1 (i int);
 Query OK, 0 rows affected (0.01 sec)
mysql> \! ls -lh /var/lib/mysql/wb
 total 144K
 -rw-r----- 1 mysql mysql 144K Sep 11 02:07 t1.ibd

This is a very good step for the product, as I_S queries now go through the same execution path as normal queries, instead of gathering all the needed data at query time or generating lots of disk seeks to respond to a query. Many blog posts have been written over the years since MySQL's data dictionary appeared (http://www.technocation.org/content/how-tell-when-using-informationschema-might-crash-your-database-0).

The fact is that the current implementation of I_S is not useful when dealing with big instances holding lots of objects. The more objects you have in a mysql instance, the riskier queries against the data dictionary become, and this is one of the benefits I can see at this moment: with I_S implemented as views, querying those tables gets faster and more stable. Also, it's good that the new Data Dictionary is transactional, so reads complete independently of ongoing writes, such as DDL statements altering columns, for example. More information about this: http://mysqlserverteam.com/a-new-data-dictionary-for-mysql/. Morgan wrote about the FRM files going away some time ago: http://www.tocker.ca/2014/07/30/beyond-the-frm-ideas-for-a-native-mysql-data-dictionary.html

If we compare the number of tables contained in the INFORMATION_SCHEMA between MySQL 5.7 and 8.0, the latter currently has 6 additional tables. On the new version these tables become VIEWs over the underlying tables that store data in a dedicated dictionary tablespace, and queries requesting metadata go through the same process as any other regular query. Below we can see the new MySQL Data Dictionary architecture:

[Image: New Data Dictionary architecture]
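
If you want to reproduce the table count comparison yourself, the same query can be run on 5.7 and on 8.0:

mysql> SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES
    -> WHERE TABLE_SCHEMA = 'information_schema';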

The current DMR documentation compares what's available on 5.7 with what's coming with the new Data Dictionary on 8.0. Basically, 5.7 still has all the .frm files for tables persisted on disk, which was said to be an approximation of a data dictionary, but not yet centralized in one place. MySQL 8.0 has an explicit definition of what the data dictionary is and is not, namely an identified set of metadata tables stored in transactional storage (InnoDB). Some additional changes may come soon regarding file names, as the engineers are thinking of using internal identifiers for them, which will impact the usage of the filename-safe encoding introduced in MySQL 5.1; with that encoding, the “table name” the storage engine gets is not the original table name, it is converted to a safe filename and all the “troublesome” characters are encoded. You can check more about the assumptions on schema definition names by clicking here, WL#6379. On the same link, one can see the new tables' definitions.

Think of it this way: when one needs to alter a column data type or even rebuild a table, the data dictionary should remain accessible for reads and writes at the same time that other users are running online schema changes. And that is the name of the new feature, Transactional Data Dictionary. I_S queries will be executed under the isolation level set by the user.

In the end, this is a big benefit for DBAs who use I_S as the target of many scripts, something that used to be an impossible strategy due to the big number of objects on existing databases. I work daily with customers where querying I_S during business hours is prohibited because it can crash the instance. I'm very happy to get this feature in MySQL 8.0, where I_S is now a set of views over metadata tables, instead of temporary tables and the preparation of a TABLE_SHARE object upon every query execution; we know very well what that means, the scan of many files on disk to gather all the data needed to deliver a result to the requester.

MySQL 8.0 DMR 1 – Invisible Indexes

One of the features that will add a good strategy to the sauce when you think about design review, focusing on queries and table indexes, is Invisible Indexes: an index can be marked as visible or invisible, and so be considered or not by the optimizer during query execution. As said in the DMR 1 docs, it should be a good option to consider when making a query more efficient. Below you can see things in action:

mysql> show tables;
+--------------+
| Tables_in_wb |
+--------------+
| t1 |
+--------------+
1 row in set (0.00 sec)
mysql> show create table t1;
+-------+----------------------------------------------------------+
| Table | Create Table |
+-------+----------------------------------------------------------+
| t1 | CREATE TABLE `t1` (
 `i` int(11) DEFAULT NULL,
 KEY `i` (`i`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+-------+----------------------------------------------------------+
1 row in set (0.00 sec)
mysql> explain select i from t1 use index(i)\G
*************************** 1. row ***************************
 id: 1
 select_type: SIMPLE
 table: t1
 partitions: NULL
 type: index
possible_keys: NULL
 key: i
 key_len: 5
 ref: NULL
 rows: 1
 filtered: 100.00
 Extra: Using index
1 row in set, 1 warning (0.00 sec)
mysql> show index from t1\G
*************************** 1. row ***************************
 Table: t1
 Non_unique: 1
 Key_name: i
 Seq_in_index: 1
 Column_name: i
 Collation: A
 Cardinality: NULL
 Sub_part: NULL
 Packed: NULL
 Null: YES
 Index_type: BTREE
 Comment:
Index_comment:
 Visible: YES
1 row in set (0.01 sec)

We can make the above index invisible:

mysql> alter table t1 alter index i invisible;
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> show index from t1\G
*************************** 1. row ***************************
 Table: t1
 Non_unique: 1
 Key_name: i
 Seq_in_index: 1
 Column_name: i
 Collation: A
 Cardinality: NULL
 Sub_part: NULL
 Packed: NULL
 Null: YES
 Index_type: BTREE
 Comment:
Index_comment:
 Visible: NO
1 row in set (0.01 sec)

mysql> explain select i from t1 use index(i)\G
*************************** 1. row ***************************
 id: 1
 select_type: SIMPLE
 table: t1
 partitions: NULL
 type: ALL
possible_keys: NULL
 key: NULL
 key_len: NULL
 ref: NULL
 rows: 1
 filtered: 100.00
 Extra: NULL
1 row in set, 1 warning (0.00 sec)

With this feature, you don't need to remove an index to test queries; in case you think an index is a duplicate, you can just toggle it between visible and invisible.
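
It's also possible to create an index as invisible from the start and flip it later; a quick sketch (the index name idx_i2 is just illustrative):

mysql> ALTER TABLE t1 ADD INDEX idx_i2 (i) INVISIBLE;
mysql> ALTER TABLE t1 ALTER INDEX idx_i2 VISIBLE;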

MySQL 8.0 DMR 1 – MySQL System Database now in InnoDB

This work started with MySQL 5.7 and now they have announced that it is complete. It was one of the most awaited things in MySQL: making the system tables fully transactional and saying bye-bye to MyISAM. All the tables below are in InnoDB, with the exception of the general and slow logs, which stay on CSV, as making them transactional could impact the server given how much data they write.

mysql> SELECT TABLE_SCHEMA,TABLE_NAME,ENGINE 
       FROM INFORMATION_SCHEMA.TABLES 
       WHERE TABLE_SCHEMA='mysql'\G
*************************** 1. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: column_stats
 ENGINE: InnoDB
*************************** 2. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: columns_priv
 ENGINE: InnoDB
*************************** 3. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: component
 ENGINE: InnoDB
*************************** 4. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: db
 ENGINE: InnoDB
*************************** 5. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: default_roles
 ENGINE: InnoDB
*************************** 6. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: engine_cost
 ENGINE: InnoDB
*************************** 7. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: func
 ENGINE: InnoDB
*************************** 8. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: general_log
 ENGINE: CSV
*************************** 9. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: gtid_executed
 ENGINE: InnoDB
*************************** 10. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: help_category
 ENGINE: InnoDB
*************************** 11. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: help_keyword
 ENGINE: InnoDB
*************************** 12. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: help_relation
 ENGINE: InnoDB
*************************** 13. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: help_topic
 ENGINE: InnoDB
*************************** 14. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: innodb_index_stats
 ENGINE: InnoDB
*************************** 15. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: innodb_table_stats
 ENGINE: InnoDB
*************************** 16. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: plugin
 ENGINE: InnoDB
*************************** 17. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: procs_priv
 ENGINE: InnoDB
*************************** 18. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: proxies_priv
 ENGINE: InnoDB
*************************** 19. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: role_edges
 ENGINE: InnoDB
*************************** 20. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: server_cost
 ENGINE: InnoDB
*************************** 21. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: servers
 ENGINE: InnoDB
*************************** 22. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: slave_master_info
 ENGINE: InnoDB
*************************** 23. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: slave_relay_log_info
 ENGINE: InnoDB
*************************** 24. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: slave_worker_info
 ENGINE: InnoDB
*************************** 25. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: slow_log
 ENGINE: CSV
*************************** 26. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: tables_priv
 ENGINE: InnoDB
*************************** 27. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: time_zone
 ENGINE: InnoDB
*************************** 28. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: time_zone_leap_second
 ENGINE: InnoDB
*************************** 29. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: time_zone_name
 ENGINE: InnoDB
*************************** 30. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: time_zone_transition
 ENGINE: InnoDB
*************************** 31. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: time_zone_transition_type
 ENGINE: InnoDB
*************************** 32. row ***************************
TABLE_SCHEMA: mysql
 TABLE_NAME: user
 ENGINE: InnoDB
32 rows in set (0,00 sec)

So, here I presented three topics covering the new features coming with the MySQL 8.0 DMR 1. I'm working on another post, to be released here within the coming days, showing more news about it.


Exploring InnoDB Schema Partial Backups with Percona Xtrabackup

March 31st, 2015 Bianchi Posted in MySQL Tuning No Comments »

I remember the time when all the database and sys admins used to speak about MySQL backup strategy and it was really not something to worry about too much, for many reasons. One of them was that the dataset on MySQL schemas was not too big, not that critical, and the information was not as sensitive as it is today. As time went by, I've seen many organisations using MySQL to store really sensitive information, such as banks, vehicle manufacturers, and critical data in the sense of “we must be ready all the time, my customer needs our services 24×7“.

It's not just Facebook, Twitter, LinkedIn or Google: many companies around the world, such as Booking.com, need systems ready all the time. Regardless of their scale-out or HA strategy, a good tool to export/import tables and even back up databases is very important, and that's what I've planned to write about here, to register all my adventures with xtrabackup and InnoDB. If you run MyISAM, maybe a simple script to cold-backup tables is enough, period, considering FLUSH TABLES WITH READ LOCK or even a moment in which you can just bring everything down, copy the files and put the database back up again (it can be a little different and not that simple, but it's something like that).

The scenario of partial backups

Starting with a quote from the online documentation:

There is only one caveat about partial backups: do not copy back the prepared backup. Restoring partial backups should be done by importing the tables, not by using the traditional --copy-back option. Although there are some scenarios where restoring can be done by copying back the files, this may lead to database inconsistencies in many cases and it is not the recommended way to do it.
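
For reference, the import-based restore the documentation points to looks roughly like this on MySQL 5.6, using transportable tablespaces (a sketch; db1.t1 stands for any table taken from the prepared backup):

mysql> ALTER TABLE db1.t1 DISCARD TABLESPACE;
-- copy db1/t1.ibd (and the .exp/.cfg file, if the backup was prepared with --export) into the datadir and fix ownership
mysql> ALTER TABLE db1.t1 IMPORT TABLESPACE;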

My problem was very clear at first sight: we've got a huge amount of information in our MySQL schemas and part of the biggest one does not need to be backed up. Explaining the scenario a bit more, there are 29 schemas, and the biggest one does not need to be completely backed up due to our business rules. A special SLAVE server that is dedicated to sales processes does not need the whole dataset of the biggest schema, so we don't need to spend all the server's disk space with useless data (in the context of this slave server). Besides that, a huge list of replicate-ignore-table entries can be found in the MySQL configuration file, and from that I started thinking about how to solve this problem using partial backups with a file listing all the tables to be part of a backup!

The first step was to select all the tables of the biggest schema, excluding those pointed out in the replicate-ignore-table options, and send the results to a file. The second step was to select all other schemas apart from the biggest one. Bottom line, I merged the files and got a single file listing just the tables of interest for this task. Unfortunately I cannot post the real data I've worked with, for obvious reasons, but I will try to use some examples…

#: let's create some databases

mysql> create database db1;
Query OK, 1 row affected (0.03 sec)

mysql> create database db2;
Query OK, 1 row affected (0.00 sec)

mysql> create database db3;
Query OK, 1 row affected (0.00 sec)

#: let's create some tables

mysql> create table db1.t1(i int);
Query OK, 0 rows affected (0.31 sec)

mysql> create table db1.t2(i int);
Query OK, 0 rows affected (0.24 sec)

mysql> create table db1.t3(i int);
Query OK, 0 rows affected (0.04 sec)

mysql> create table db2.t1(i int);
Query OK, 0 rows affected (0.22 sec)

mysql> create table db2.t2(i int);
Query OK, 0 rows affected (0.22 sec)

mysql> create table db2.t3(i int);
Query OK, 0 rows affected (0.30 sec)

mysql> create table db3.t1(i int);
Query OK, 0 rows affected (0.41 sec)

mysql> create table db3.t2(i int);
Query OK, 0 rows affected (0.32 sec)

mysql> create table db3.t3(i int);
Query OK, 0 rows affected (0.18 sec)

This way, I've got the following MySQL structures on disk:

[root@mysql01 opt]# mysqldiskusage --server=root:123456@localhost:3306:/var/lib/mysql/mysql.sock --all
WARNING: Using a password on the command line interface can be insecure.
# Source on localhost: ... connected.
# Database totals:
+---------------------+------------+
| db_name             |     total  |
+---------------------+------------+
| db1                 | 373,887    |
| db2                 | 373,887    |
| db3                 | 373,887    |
| mysql               | 1,577,981  |
| performance_schema  | 489,543    |
+---------------------+------------+

Total database disk usage = 3,189,185 bytes or 3.04 MB

# Log information.
# The general_log is turned off on the server.
# The slow_query_log is turned off on the server.
+-------------+---------+
| log_name    |   size  |
+-------------+---------+
| mysqld.log  | 36,043  |
+-------------+---------+

Total size of logs = 36,043 bytes or 35.20 KB

# Binary log information:
Current binary log file = mysql01-bin.000041
+---------------------+-------+
| log_file            | size  |
+---------------------+-------+
| mysql01-bin.000001  | 1825  |
| mysql01-bin.000002  | 570   |
| mysql01-bin.000003  | 240   |
| mysql01-bin.000004  | 240   |
[...]
| mysql01-bin.index   | 1280  |
+---------------------+-------+

Total size of binary logs = 15,234 bytes or 14.88 KB

# Relay log information:
Current relay log file = mysqld-relay-bin.000003
+--------------------------+-------+
| log_file                 | size  |
+--------------------------+-------+
| mysqld-relay-bin.000003  | 143   |
| mysqld-relay-bin.000004  | 143   |
| mysqld-relay-bin.000005  | 120   |
| mysqld-relay-bin.index   | 78    |
+--------------------------+-------+

Total size of relay logs = 484 bytes

# InnoDB tablespace information:
+--------------+-------------+
| innodb_file  |       size  |
+--------------+-------------+
| ib_logfile0  | 50,331,648  |
| ib_logfile1  | 50,331,648  |
| ibdata1      | 12,582,912  |
+--------------+-------------+

Total size of InnoDB files = 113,246,208 bytes or 108.00 MB

#...done.

OK, after this creation process to simulate what I'm going to blog about here, I'll assume that the biggest schema is db1 and we don't need to back up all of its tables. The only table on db1 that is required for this backup is t1; all other databases, including mysql and performance_schema, are required (performance_schema can also end up created/upgraded by a mysql_upgrade execution, by chance). This way I can now get the list of tables of all databases, excluding those I don't want from db1 – t2 and t3.

mysql> SELECT CONCAT(TABLE_SCHEMA,'.',TABLE_NAME) INTO OUTFILE '/tmp/tablenames-db1' LINES TERMINATED BY '\n' 
    -> FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='db1' AND TABLE_NAME NOT IN ('t2','t3');
Query OK, 1 row affected (0.00 sec)

mysql> \! cat /tmp/tablenames-db1
db1.t1

mysql> SELECT CONCAT(TABLE_SCHEMA,'.',TABLE_NAME) INTO OUTFILE '/tmp/tablename' LINES TERMINATED BY '\n' 
    -> FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA IN ('db2','db3','mysql', 'performance_schema');
Query OK, 86 rows affected (0.00 sec)

mysql> \! cat /tmp/tablenames-db1 >> /tmp/tablename

mysql> \! cat /tmp/tablename
db2.t1
db2.t2
db2.t3
db3.t1
db3.t2
db3.t3
mysql.columns_priv
mysql.db
mysql.event
mysql.func
mysql.general_log
mysql.help_category
mysql.help_keyword
mysql.help_relation
mysql.help_topic
mysql.innodb_index_stats
mysql.innodb_table_stats
mysql.ndb_binlog_index
mysql.plugin
mysql.proc
mysql.procs_priv
mysql.proxies_priv
mysql.servers
mysql.slave_master_info
mysql.slave_relay_log_info
mysql.slave_worker_info
mysql.slow_log
mysql.tables_priv
mysql.time_zone
mysql.time_zone_leap_second
mysql.time_zone_name
mysql.time_zone_transition
mysql.time_zone_transition_type
mysql.user
performance_schema.accounts
performance_schema.cond_instances
performance_schema.events_stages_current
performance_schema.events_stages_history
performance_schema.events_stages_history_long
performance_schema.events_stages_summary_by_account_by_event_name
performance_schema.events_stages_summary_by_host_by_event_name
performance_schema.events_stages_summary_by_thread_by_event_name
performance_schema.events_stages_summary_by_user_by_event_name
performance_schema.events_stages_summary_global_by_event_name
performance_schema.events_statements_current
performance_schema.events_statements_history
performance_schema.events_statements_history_long
performance_schema.events_statements_summary_by_account_by_event_name
performance_schema.events_statements_summary_by_digest
performance_schema.events_statements_summary_by_host_by_event_name
performance_schema.events_statements_summary_by_thread_by_event_name
performance_schema.events_statements_summary_by_user_by_event_name
performance_schema.events_statements_summary_global_by_event_name
performance_schema.events_waits_current
performance_schema.events_waits_history
performance_schema.events_waits_history_long
performance_schema.events_waits_summary_by_account_by_event_name
performance_schema.events_waits_summary_by_host_by_event_name
performance_schema.events_waits_summary_by_instance
performance_schema.events_waits_summary_by_thread_by_event_name
performance_schema.events_waits_summary_by_user_by_event_name
performance_schema.events_waits_summary_global_by_event_name
performance_schema.file_instances
performance_schema.file_summary_by_event_name
performance_schema.file_summary_by_instance
performance_schema.host_cache
performance_schema.hosts
performance_schema.mutex_instances
performance_schema.objects_summary_global_by_type
performance_schema.performance_timers
performance_schema.rwlock_instances
performance_schema.session_account_connect_attrs
performance_schema.session_connect_attrs
performance_schema.setup_actors
performance_schema.setup_consumers
performance_schema.setup_instruments
performance_schema.setup_objects
performance_schema.setup_timers
performance_schema.socket_instances
performance_schema.socket_summary_by_event_name
performance_schema.socket_summary_by_instance
performance_schema.table_io_waits_summary_by_index_usage
performance_schema.table_io_waits_summary_by_table
performance_schema.table_lock_waits_summary_by_table
performance_schema.threads
performance_schema.users
db1.t1

This way I produced the file to be used with the --tables-file option. OK, now it's time for the backup:

[root@mysql01 opt]# sudo innobackupex --user=root --password=123456 --tables-file=/tmp/tablename --history=partial01 /opt

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013.  All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150331 17:32:29  innobackupex: Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup' as 'root'  (using password: YES).
150331 17:32:29  innobackupex: Connected to MySQL server
150331 17:32:29  innobackupex: Executing a version check against the server...
150331 17:32:29  innobackupex: Done.
150331 17:32:29  innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

innobackupex:  Using server version 5.6.23-log

innobackupex: Created backup directory /opt/2015-03-31_17-32-29

150331 17:32:29  innobackupex: Starting ibbackup with command: xtrabackup  --defaults-group="mysqld" --backup --suspend-at-end --target-dir=/opt/2015-03-31_17-32-29 --innodb_log_file_size="50331648" --innodb_data_file_path="ibdata1:12M:autoextend" --tmpdir=/tmp --extra-lsndir='/tmp' --tables_file='/tmp/tablename'
innobackupex: Waiting for ibbackup (pid=4771) to suspend
innobackupex: Suspend file '/opt/2015-03-31_17-32-29/xtrabackup_suspended_2'

xtrabackup version 2.2.10 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
>> log scanned up to (1694982)
xtrabackup: Generating a list of tablespaces
>> log scanned up to (1694982)
[01] Copying ./ibdata1 to /opt/2015-03-31_17-32-29/ibdata1
[01]        ...done
>> log scanned up to (1694982)
[01] Copying ./mysql/innodb_index_stats.ibd to /opt/2015-03-31_17-32-29/mysql/innodb_index_stats.ibd
[01]        ...done
[01] Copying ./mysql/slave_worker_info.ibd to /opt/2015-03-31_17-32-29/mysql/slave_worker_info.ibd
[01]        ...done
[01] Copying ./mysql/innodb_table_stats.ibd to /opt/2015-03-31_17-32-29/mysql/innodb_table_stats.ibd
[01]        ...done
[01] Copying ./mysql/slave_master_info.ibd to /opt/2015-03-31_17-32-29/mysql/slave_master_info.ibd
[01]        ...done
[01] Copying ./mysql/slave_relay_log_info.ibd to /opt/2015-03-31_17-32-29/mysql/slave_relay_log_info.ibd
[01]        ...done
[01] Copying ./db3/t1.ibd to /opt/2015-03-31_17-32-29/db3/t1.ibd
[01]        ...done
[01] Copying ./db3/t2.ibd to /opt/2015-03-31_17-32-29/db3/t2.ibd
[01]        ...done
>> log scanned up to (1694982)
[01] Copying ./db3/t3.ibd to /opt/2015-03-31_17-32-29/db3/t3.ibd
[01]        ...done
[01] Copying ./db2/t1.ibd to /opt/2015-03-31_17-32-29/db2/t1.ibd
[01]        ...done
[01] Copying ./db2/t2.ibd to /opt/2015-03-31_17-32-29/db2/t2.ibd
[01]        ...done
[01] Copying ./db2/t3.ibd to /opt/2015-03-31_17-32-29/db2/t3.ibd
[01]        ...done
[01] Copying ./db1/t1.ibd to /opt/2015-03-31_17-32-29/db1/t1.ibd
[01]        ...done
>> log scanned up to (1694982)
xtrabackup: Creating suspend file '/opt/2015-03-31_17-32-29/xtrabackup_suspended_2' with pid '4771'

150331 17:32:34  innobackupex: Continuing after ibbackup has suspended
150331 17:32:34  innobackupex: Executing FLUSH TABLES WITH READ LOCK...
150331 17:32:34  innobackupex: All tables locked and flushed to disk

150331 17:32:34  innobackupex: Starting to backup non-InnoDB tables and files
innobackupex: in subdirectories of '/var/lib/mysql/'
innobackupex: Backing up files '/var/lib/mysql//mysql/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (74 files)
>> log scanned up to (1694982)
>> log scanned up to (1694982)
>> log scanned up to (1694982)
innobackupex: Backing up files '/var/lib/mysql//performance_schema/*.{frm,isl,MYD,MYI,MAD,MAI,MRG,TRG,TRN,ARM,ARZ,CSM,CSV,opt,par}' (53 files)
>> log scanned up to (1694982)
innobackupex: Backing up file '/var/lib/mysql//db3/t3.frm'
innobackupex: Backing up file '/var/lib/mysql//db3/t1.frm'
innobackupex: Backing up file '/var/lib/mysql//db3/t2.frm'
>> log scanned up to (1694982)
innobackupex: Backing up file '/var/lib/mysql//db2/t3.frm'
innobackupex: Backing up file '/var/lib/mysql//db2/t1.frm'
innobackupex: Backing up file '/var/lib/mysql//db2/t2.frm'
innobackupex: Backing up file '/var/lib/mysql//db1/t1.frm'
150331 17:32:38  innobackupex: Finished backing up non-InnoDB tables and files

150331 17:32:38  innobackupex: Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
150331 17:32:38  innobackupex: Waiting for log copying to finish

xtrabackup: The latest check point (for incremental): '1694982'
xtrabackup: Stopping log copying thread.
.>> log scanned up to (1694982)

xtrabackup: Creating suspend file '/opt/2015-03-31_17-32-29/xtrabackup_log_copied' with pid '4771'
xtrabackup: Transaction log of lsn (1694982) to (1694982) was copied.
150331 17:32:39  innobackupex: All tables unlocked

innobackupex: Backup created in directory '/opt/2015-03-31_17-32-29'
innobackupex: MySQL binlog position: GTID of the last change 'f2b66a45-ce62-11e4-8a01-0800274fb806:1-18'
innobackupex: Backup history record uuid edfd8656-d7cb-11e4-9cd1-0800274fb806 successfully written
150331 17:32:40  innobackupex: Connection to database server closed
150331 17:32:40  innobackupex: completed OK!

Observing the xtrabackup output carefully, one can quickly see that the tables we left out really stayed out, and this is the result I was looking for, no news here. Until now we've been working to get things running as cleanly as possible, and it works like a charm. But this is not enough to avoid problems when using the backup directory produced by xtrabackup (in this case, /opt/2015-03-31_17-32-29). If one uses /opt/2015-03-31_17-32-29 as the MySQL DATADIR at this point, when mysqld starts up, the error message below will be seen for each table which is not part of the backup:

2015-03-30 21:27:56 44823 [ERROR] InnoDB: Tablespace open failed for '"db1"."t2"', ignored.
2015-03-30 21:27:56 7ff5d9f92720  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.

2015-03-30 21:27:56 44823 [ERROR] InnoDB: Tablespace open failed for '"db1"."t3"', ignored.
2015-03-30 21:27:56 7ff5d9f92720  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.

To avoid this, a second step is needed to clean the metadata of the missing tables out of ibdata1 (the prepare phase!!):

[root@mysql01 opt]# sudo innobackupex --user=root --password=123456 --apply-log /opt/2015-03-31_17-32-29

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona LLC and/or its affiliates 2009-2013.  All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

Get the latest version of Percona XtraBackup, documentation, and help resources:
http://www.percona.com/xb/p

150331 17:41:06  innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
           At the end of a successful apply-log run innobackupex
           prints "completed OK!".


150331 17:41:07  innobackupex: Starting ibbackup with command: xtrabackup  --defaults-file="/opt/2015-03-31_17-32-29/backup-my.cnf"  --defaults-group="mysqld" --prepare --target-dir=/opt/2015-03-31_17-32-29

xtrabackup version 2.2.10 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: cd to /opt/2015-03-31_17-32-29
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(1694982)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 1
xtrabackup:   innodb_log_file_size = 2097152
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 1
xtrabackup:   innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Not using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 100.0M
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: The log sequence numbers 1638299 and 1638299 in ibdata files do not match the log sequence number 1694982 in the ib_logfiles!
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: Last MySQL binlog file position 0 1802, file name mysql01-bin.000001
InnoDB: Table db1/t2 in the InnoDB data dictionary has tablespace id 8, but tablespace with that id or name does not exist. Have you deleted or moved .ibd files? This may also be a table created with CREATE TEMPORARY TABLE whose .ibd and .frm files MySQL automatically removed, but the table still exists in the InnoDB internal data dictionary.
InnoDB: It will be removed from the data dictionary.
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
InnoDB: for how to resolve the issue.
InnoDB: Table db1/t3 in the InnoDB data dictionary has tablespace id 9, but tablespace with that id or name does not exist. Have you deleted or moved .ibd files? This may also be a table created with CREATE TEMPORARY TABLE whose .ibd and .frm files MySQL automatically removed, but the table still exists in the InnoDB internal data dictionary.
InnoDB: It will be removed from the data dictionary.
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html
InnoDB: for how to resolve the issue.
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.22 started; log sequence number 1694982

[notice (again)]
  If you use binary log and don't use any hack of group commit,
  the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 1802, file name mysql01-bin.000001

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 1696565

150331 17:41:10  innobackupex: Restarting xtrabackup with command: xtrabackup  --defaults-file="/opt/2015-03-31_17-32-29/backup-my.cnf"  --defaults-group="mysqld" --prepare --target-dir=/opt/2015-03-31_17-32-29
for creating ib_logfile*

xtrabackup version 2.2.10 based on MySQL server 5.6.22 Linux (x86_64) (revision id: )
xtrabackup: cd to /opt/2015-03-31_17-32-29
xtrabackup: This target seems to be already prepared.
xtrabackup: notice: xtrabackup_logfile was already used to '--prepare'.
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup:   innodb_data_home_dir = ./
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 50331648
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 104857600 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.3
InnoDB: Not using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 100.0M
InnoDB: Completed initialization of buffer pool
InnoDB: Setting log file ./ib_logfile101 size to 48 MB
InnoDB: Setting log file ./ib_logfile1 size to 48 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=1696565
InnoDB: Highest supported file format is Barracuda.
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.22 started; log sequence number 1696780

[notice (again)]
  If you use binary log and don't use any hack of group commit,
  the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 1802, file name mysql01-bin.000001

xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 1696790
150331 17:41:12  innobackupex: completed OK!

Having done it this way, one can just transfer the backupset (if it's huge, try ftp'ing the files between servers), change the owner of the new directory, point MySQL's DATADIR variable to it and finally restart mysqld, monitoring the error log.
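
A quick sketch of those final steps, using the paths from this example (the mysqld_safe invocation is illustrative; you can also point datadir in my.cnf):

$ sudo chown -R mysql:mysql /opt/2015-03-31_17-32-29
$ sudo mysqld_safe --datadir=/opt/2015-03-31_17-32-29 &

And this is the error log after the restart: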

150331 17:46:32 mysqld_safe Starting mysqld daemon with databases from /opt/2015-03-31_17-32-29
2015-03-31 17:46:33 4759 [Note] Plugin 'FEDERATED' is disabled.
2015-03-31 17:46:34 4759 [Note] InnoDB: Using atomics to ref count buffer pool pages
2015-03-31 17:46:34 4759 [Note] InnoDB: The InnoDB memory heap is disabled
2015-03-31 17:46:34 4759 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2015-03-31 17:46:34 4759 [Note] InnoDB: Memory barrier is not used
2015-03-31 17:46:34 4759 [Note] InnoDB: Compressed tables use zlib 1.2.3
2015-03-31 17:46:34 4759 [Note] InnoDB: Using Linux native AIO
2015-03-31 17:46:34 4759 [Note] InnoDB: Not using CPU crc32 instructions
2015-03-31 17:46:34 4759 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2015-03-31 17:46:34 4759 [Note] InnoDB: Completed initialization of buffer pool
2015-03-31 17:46:34 4759 [Note] InnoDB: Highest supported file format is Barracuda.
2015-03-31 17:46:34 4759 [Note] InnoDB: 128 rollback segment(s) are active.
2015-03-31 17:46:34 4759 [Note] InnoDB: Waiting for purge to start
2015-03-31 17:46:34 4759 [Note] InnoDB: 5.6.23 started; log sequence number 1696790
2015-03-31 17:46:35 4759 [Note] Server hostname (bind-address): '*'; port: 3306
2015-03-31 17:46:35 4759 [Note] IPv6 is available.
2015-03-31 17:46:35 4759 [Note]   - '::' resolves to '::';
2015-03-31 17:46:35 4759 [Note] Server socket created on IP: '::'.
2015-03-31 17:46:36 4759 [Note] Event Scheduler: Loaded 0 events
2015-03-31 17:46:36 4759 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.23-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)

The error log must be as clean as possible to convince you that everything went well with the backupset produced and with all the processes used to get it done. I've also tried it with log_warnings=3 and ended up getting a clean log once again.

Did some questions pop into your mind? Fire up a comment!!


Working with MySQL on SSD

November 6th, 2014 Bianchi Posted in MySQL A&D, MySQL Tuning No Comments »

I'd like to start this post registering that, even though SSD cards and disks provide very low latency and faster random reads/writes, I consider this still new to MySQLers, at least in the MySQL world. New based on the information we can find on the internet, in the form of collaboration to make MySQL run on it “like a charm”, and based on the presentations we've been seeing more and more over the last months. Things like SLC and MLC are better explained now than before, and what I've seen is that the MySQL team has collaborated a lot with all these *new* things to make MySQL scale more in terms of I/O usage, delivering better results from simple SELECT queries to heavy ALTER TABLEs. What I expected, when SSD came into the plans of a special customer I'm working with in Brazil, was that all the queries would perform better just by sitting on faster disks – this is not true. Many tests have been done, using sysbench 0.5 and more than one table, as discussed on twitter with @cpeintre, @morgo and @stoker. Sysbench results will be in focus soon in this post.

Presenting the Dell Compellent SC8000, the storage!

My first desire was to have Fusion IO cards to run some of MySQL's files on, to make it easier, as the market has been doing that for some time. For years I've seen many guys speaking about those flash cards delivering lots of IOPS and making MySQL run faster. BTW, when our Dell contact presented the Dell Compellent SC8000, we saw the possibility to move the IT environment towards a more professional setup, with the possibility to scale the hardware in case we need more space on our database layer. This storage, aka “external storage”, represents a large investment and a giant step in terms of professionalism, and was conceived as something that will provide all the IOPS and speed we need to solve problems ranging from queries that feed reports to replication lags that happen for no good reason (we can go into details of that later, in another post). Detailing the storage: it has the intelligence to always write on SLC flash disks organized in RAID 10 (tier 1), always read from MLC flash disks organized in RAID 5 (tier 2), and data/pages not accessed are *initially* moved after 12 days to the 10k mechanical disks in RAID 6, which is tier 3.

Additionally, tier 2 is the hot area where the most accessed data resides. When data is inserted into the database, it's moved to tier 2 in the background and, if not accessed, moved away to the mechanical disks, the less privileged area. It seems to me that internally this storage keeps a kind of hash table of all the pages contained in the hot area, that is, tier 2, and from time to time it moves the less accessed pages out. In case tier 2 gets full, less accessed pages will be moved to tier 3 before the 12th day. Basically, one can choose a profile to create a new LUN. This profile can be comprised of tier 1 only, tier 2 only, tier 3 only, or any combination among them. The connectivity between the storage and the servers is done by a dedicated fibre channel network, using an 8GB dual-port HBA (round-robin).

Nice…it’s flexible. More here…

Test with FIO

Theoretically, all those things seemed OK and we went for a test with FIO. The test went very well; it's just a matter of creating a LUN with a profile such as “automatic”, containing all the existing tiers, and mounting it on a Linux machine, which here is Red Hat 6.5. After writing a configuration file to be read by FIO, simulating what MySQL does on our environment, it was executed on both scenarios: (NDB2) our server running all MySQL files on HDD, and (NDB3) the other server running MySQL files on SSD. The FIO script is the one below:

[random-writes]
; this test was written by Bianchi
; me at wagnerbianchi.com
runtime=1800 # 30mins
size=2G
threads=1
numjobs=16
ioengine=libaio
iodepth=32
bs=5k
; innodb related stuff
lockfile=readwrite # simulating row-locking
rw=randwrite       # writing in pages randomly
direct=0           # O_DSYNC
sync=0             # O_DSYNC
refill_buffers     # Buffer Pool load pages
openfiles=1000
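
Running the job is then a single command (assuming the file above was saved as mysql-sim.fio):

$ fio mysql-sim.fio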

My intention in configuring direct=0 and sync=0 was to reproduce what we have on our current production environment: deliver all the writes to a battery-backed cache and return immediately. The test results:

Server  Job #  IO (MB)  IO (count)  TIME (ms)
NDB2     1   1965.4       368   1091316
NDB2     2   2047.2       498    841042
NDB2     3   2047.2       380   1103541
NDB2     4   1704.3       443    787271
NDB2     5   2047.2       471    889231
NDB2     6   2015.6       434    951029
NDB2     7   2047.2       411   1020253
NDB2     8   2047.2       387   1081822
NDB2     9   2047.2       481    870714
NDB2    10   2011.1       549    749626
NDB2    11   1633.6       740    452040
NDB2    12   2047.2       488    858940
NDB2    13   2047.2       378   1107883
NDB2    14   1945.6       602    661052
NDB2    15   2047.2       585    716770
NDB2    16   2000.9       601    680994

[Chart: NDB2 FIO results]

Server   Job #  IO (MB)  IO (count)  TIME (ms)
STORAGE     1  1965.4     2115    190270
STORAGE     2  2047.2     2925    143387
STORAGE     3  2047.2     3212    130562
STORAGE     4  1704.3     2910    119915
STORAGE     5  2047.2     3010    139334
STORAGE     6  2015.6     2138    193032
STORAGE     7  2047.2     3073    136465
STORAGE     8  2047.2     2791    150233
STORAGE     9  2047.2     2415    173628
STORAGE    10  2011.1     3027    136085
STORAGE    11  1633.6     2186    153012
STORAGE    12  2047.2     2700    155319
STORAGE    13  2047.2     2779    150917
STORAGE    14  1945.6     2886    138059
STORAGE    15  2047.2     2785    150573
STORAGE    16  2000.9     2865    142991

[Chart: storage FIO results]

While IOPS are way higher on SSD, latency is way lower. The next step was to set everything up, get the storage working inside our main DC, and mount a LUN on a server to carry on with tests. The first sysbench run I did was using Percona Server 5.5.37.1 and, even configuring innodb_adaptive_flushing_method as keep_average, innodb_flush_neighbor_pages as area, and changing the redo log block size to 4096, MySQL wasn't able to use all the I/O we were expecting. At that point, speaking with @morgo, a version upgrade came into the scene and I went for it. The only barrier I had upgrading 5.5 to 5.6 was the question around temporal data types, which we discussed with some folks on the official MySQL forum. Even with replication between 5.5.37 (master prod) and 5.6.21 (newcomer slave with SSD) running well for more than 10 hours, I decided to apply the solution proposed by Johnaton Coombes. It's running well until now…

Sysbench’ing

After seeing that the storage really delivers what we were looking for, it was time to check the best configuration for MySQL on SSD. After reading Matsunobu's entry on his blog, I rearranged everything considering sequentially-written files on HDD and just the tables and the shared tablespace on SSD (it's also possible to put the undo files on SSD and all the other parts of ibdata1 on HDD). That gave me new numbers, and replication gained more throughput having the relay logs accompanied by the redo logs, error log and slow query log. Thanks to @cpeintre for the hint about using more than one table in sysbench situations, and to @lefred for hosting the sysbench rpm package on his blog (it's nice).
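
In my.cnf terms, that file placement looks roughly like this; a minimal sketch, assuming /ssd and /hdd mount points (the paths are illustrative, not my real ones):

# random I/O (tablespaces) on SSD, sequentially-written files on HDD
datadir                   = /ssd/mysql/data      # .ibd files and ibdata1
innodb_log_group_home_dir = /hdd/mysql/logs      # redo logs
log-bin                   = /hdd/mysql/logs/mysql-bin
relay-log                 = /hdd/mysql/logs/relay-bin
log-error                 = /hdd/mysql/logs/mysqld.err
slow-query-log-file       = /hdd/mysql/logs/slow.log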

innodb_flush_method and innodb_flush_log_at_trx_commit

At this point I started some tests to find the best combination of some important parameters to better handle the InnoDB write workflow. On my current environment using mechanical disks, I've configured mysqld to use more and more memory and file system cache, taking into account that my underlying hardware relies on disk controllers with a 512MB battery-backed cache – this permits my whole system to deliver almost 9,300 IOPS using RAID 1. My intention here is to test innodb_flush_log_at_trx_commit as 1 when the flush method is O_DIRECT, and innodb_flush_log_at_trx_commit as 0 or 2 when the flush method is O_DSYNC – I'd like to remind you that I'm using Oracle MySQL.

Considering that O_DSYNC with innodb_flush_log_at_trx_commit as 0 or 2 had the same results…

[Screenshot: benchmark results]

So let's benchmark it.

--innodb_io_capacity=2000
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=1
--innodb_flush_method=O_DIRECT
[Screenshots: benchmark results for this configuration]
--innodb_io_capacity=2000
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=0
--innodb_flush_method=O_DSYNC
[Screenshots: benchmark results for this configuration]


The final summary was:

[Screenshots: final summary charts]

innodb_io_capacity and innodb_lru_scan_depth

After reading the blog entry written by Mark Callaghan in 2013 about these two system variables, I decided to use his values as a starting point. As Mark explains well on his blog entry and on twitter, as here, both variables will give mysqld more IOPS if such resources are available on the system. So I went from 1000 to 3000 to find a reasonable value for io_capacity, and did the same for lru_scan_depth.
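
Both variables are dynamic, so each combination tested below can be applied on the fly, without restarting mysqld; for example:

mysql> SET GLOBAL innodb_io_capacity = 2000;
mysql> SET GLOBAL innodb_lru_scan_depth = 2500;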

#: Sysbench line used here:
[bianchi@ndb2 db]$ sudo sysbench --test=oltp.lua --oltp-table-size=1000000 --mysql-db=test --oltp-tables-count=10 --mysql-user=bianchi --db-driver=mysql --mysql-table-engine=innodb --max-time=300 --max-requests=0 --report-interval=60 --num-threads=500 --mysql-socket=/var/mysql/logs/mysql.sock --mysql-engine-trx=yes run

1-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 1000 |                    1000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2895.09, reads/s: 43241.46, writes/s: 11824.06, response time: 1278.56ms (95%)
[ 120s] threads: 500, tps: 2919.87, reads/s: 43432.81, writes/s: 11914.27, response time: 1387.02ms (95%)
[ 180s] threads: 500, tps: 2911.20, reads/s: 43266.95, writes/s: 11875.58, response time: 1397.43ms (95%)
[ 240s] threads: 500, tps: 2896.17, reads/s: 43039.52, writes/s: 11812.63, response time: 1385.36ms (95%)
[ 300s] threads: 500, tps: 2881.70, reads/s: 42842.40, writes/s: 11756.67, response time: 1382.87ms (95%)

2-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    2000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2834.36, reads/s: 42276.71, writes/s: 11570.30, response time: 1293.57ms (95%)
[ 120s] threads: 500, tps: 2964.74, reads/s: 44071.70, writes/s: 12094.58, response time: 1383.70ms (95%)
[ 180s] threads: 500, tps: 2943.48, reads/s: 43790.31, writes/s: 12011.63, response time: 1380.39ms (95%)
[ 240s] threads: 500, tps: 2940.23, reads/s: 43772.47, writes/s: 12002.10, response time: 1381.63ms (95%)
[ 300s] threads: 500, tps: 2961.58, reads/s: 44007.70, writes/s: 12079.94, response time: 1376.67ms (95%)

3-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    4000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2835.78, reads/s: 42283.84, writes/s: 11577.04, response time: 1287.78ms (95%)
[ 120s] threads: 500, tps: 2866.35, reads/s: 42659.13, writes/s: 11697.75, response time: 1418.51ms (95%)
[ 180s] threads: 500, tps: 2901.80, reads/s: 43129.23, writes/s: 11834.54, response time: 1383.28ms (95%)
[ 240s] threads: 500, tps: 2924.12, reads/s: 43527.28, writes/s: 11934.51, response time: 1394.09ms (95%)
[ 300s] threads: 500, tps: 2928.04, reads/s: 43537.30, writes/s: 11946.43, response time: 1390.76ms (95%)

4-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    3000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2915.01, reads/s: 43438.88, writes/s: 11896.84, response time: 1276.65ms (95%)
[ 120s] threads: 500, tps: 3003.12, reads/s: 44634.98, writes/s: 12248.90, response time: 1345.71ms (95%)
[ 180s] threads: 500, tps: 2983.62, reads/s: 44394.64, writes/s: 12174.23, response time: 1372.15ms (95%)
[ 240s] threads: 500, tps: 2971.40, reads/s: 44181.10, writes/s: 12122.10, response time: 1361.10ms (95%)
[ 300s] threads: 500, tps: 2976.20, reads/s: 44241.53, writes/s: 12140.61, response time: 1360.70ms (95%)

5-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    2500 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2915.46, reads/s: 43605.14, writes/s: 11914.68, response time: 1207.51ms (95%)
[ 120s] threads: 500, tps: 2993.02, reads/s: 44541.72, writes/s: 12214.99, response time: 1358.26ms (95%)
[ 180s] threads: 500, tps: 3004.48, reads/s: 44628.71, writes/s: 12254.80, response time: 1346.52ms (95%)
[ 240s] threads: 500, tps: 3014.33, reads/s: 44839.96, writes/s: 12298.70, response time: 1366.41ms (95%)
[ 300s] threads: 500, tps: 2974.83, reads/s: 44291.42, writes/s: 12142.27, response time: 1357.04ms (95%)

Summarizing the facts collected above, in terms of…

Response Times

[Chart: response times]

TPS

[Chart: TPS]

Reads/Writes

[Chart: reads/writes]

innodb_log_buffer_size

This was configured with a large value and it was annoying me a little. After finding the Shlomi Noach blog entry with a good query to check the size of the transactions that populate the log buffer, it seemed very important to put a more accurate configuration in place.

ndb2 mysql> SELECT
    ->   innodb_os_log_written_per_minute*60
    ->     AS estimated_innodb_os_log_written_per_hour,
    ->   CONCAT(ROUND(innodb_os_log_written_per_minute*60/1024/1024, 1), 'MB')
    ->     AS estimated_innodb_os_log_written_per_hour_mb
    -> FROM
    ->   (SELECT SUM(value) AS innodb_os_log_written_per_minute FROM (
    ->     SELECT -VARIABLE_VALUE AS value
    ->       FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    ->       WHERE VARIABLE_NAME = 'innodb_os_log_written'
    ->     UNION ALL
    ->     SELECT SLEEP(60)
    ->       FROM DUAL
    ->     UNION ALL
    ->     SELECT VARIABLE_VALUE
    ->       FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    ->       WHERE VARIABLE_NAME = 'innodb_os_log_written'
    ->   ) s1
    -> ) s2
    -> \G
*************************** 1. row ***************************
   estimated_innodb_os_log_written_per_hour: 1008476160
estimated_innodb_os_log_written_per_hour_mb: 961.8MB
1 row in set (59.99 sec)

ndb2 mysql> SELECT (961.8/60)\G
*************************** 1. row ***************************
(961.8/60): 16.03000
1 row in set (0.00 sec)

Operating System Tuning

All the machine servers planned to be placed on the storage run Red Hat 6.5. After updating the operating system packages, we followed the recommendations of this paper released by Oracle, differing just on the scheduler/elevator, where we decided to use [NOOP]. In the midst of the configuration work, which ran for some days, there was a moment when we forgot to apply the configuration below, and so we had a chance to see that it improves performance by around 30%, considering replication lag and query execution times for reads and writes. As the storage attached/mounted on the file system is represented by an alias or device mapper (it appears as dm-X) for all the underlying disks, it's possible to configure just the device mappers in order to make all of this work properly with NOOP.

$ echo 10000 > /sys/block/sdb/queue/nr_requests
$ echo 1024 > /sys/block/sdb/queue/max_sectors_kb
$ echo 0 > /sys/block/sdb/queue/rotational
$ echo 0 > /sys/block/sdb/queue/add_random
$ echo 0 > /sys/block/sdb/queue/rq_affinity

You can check the meaning of each setting on Red Hat's Knowledge Base. Additionally, it is a good idea to place all the above configuration in /etc/rc.local so it survives reboots.
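
For completeness, the NOOP elevator mentioned above is selected the same way; a sketch, again assuming the device shows up as sdb:

$ echo noop > /sys/block/sdb/queue/scheduler
$ cat /sys/block/sdb/queue/scheduler   # the active elevator is shown between brackets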

Replication Lagging Problems

OK, so scenario #5 is the best of all. Still, my feeling is that at some point all those benchmarks done with sysbench lied completely! When I configured my server on SSD and got it replicating, catching up on the master's data, the lag did not decrease after an hour. Instead, the lag increased and the slave server was getting further and further from the master, almost fading away on the road. Thinking about the configuration I've got on my established environment, I decided to set it up as O_DSYNC, relying on the file system cache and the storage controller's battery-backed cache (64GB), configuring innodb_flush_log_at_trx_commit as 0 as well. Things started getting a little faster, and the lag stopped growing, holding at the same number of Seconds_Behind_Master. Then I finally made the lag decrease when I properly tuned innodb_log_buffer_size, as I told in the sections above, and the replication lag disappeared; this new server is now the only one that always stays below the red lagging state, which is 50 seconds (our company threshold). At first I configured the log buffer as 8M but, checking the status variables properly, I saw many pending syncs accumulating there. I jumped to 32M and now everything is OK. The next step, as I'm running 5.6 now, is to jump into this party and start using PERFORMANCE_SCHEMA and other smart things to monitor the environment in order to increase throughput and get even less response time.
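
For the record, a quick way to see whether the log buffer is too small (my assumption is that this is the counter behind the pending syncs mentioned above) is the Innodb_log_waits status variable, which counts how many times a log write had to wait for buffer space:

mysql> SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';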

By the way, up to this point I've run MySQL 5.6.21 with the configuration file below and the Linux adjustments previously mentioned:

--innodb_io_capacity=2000
--innodb_io_capacity_max=2500
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=2
--innodb_flush_neighbors=0
--innodb_log_group_home_dir=/var/mysql/logs
--innodb_log_files_in_group=2
--innodb_log_file_size=1024M
--innodb_buffer_pool_size=72G
--innodb_doublewrite=1
--innodb_buffer_pool_instances=10
--innodb_log_buffer_size=32M
--innodb_file_per_table=1
--innodb_file_format=BARRACUDA
--innodb_flush_method=O_DSYNC
--innodb_open_files=900000
--innodb_read_io_threads=16
--innodb_write_io_threads=16
--innodb_support_xa=0

The final comment is that, when running MySQL on SSD, 5.5 is the worst case and 5.6 behaves a little better, as became clear when the I/O charts started getting more colorful in Ganglia and in the Enterprise Manager, the storage's monitoring center. Still on 5.6, it is good to pay attention to two variables: innodb_lru_scan_depth, which governs how much flushing work is done per buffer pool instance, and innodb_flush_neighbors, which controls whether neighbor pages are flushed together, something worth disabling on SSD. I believe that soon I'll have more to post here on performance tuning.


Fast Index Creation really matters

julho 23rd, 2014 Bianchi Posted in MySQL A&D, MySQL Tuning No Comments »

In one of the recent projects I've been involved in, I had to review the data model to find redundant or unnecessary indexes on tables. The scenario is one where the database was recently moved from MyISAM to the InnoDB Storage Engine. So, considering that some critical queries were hanging inside InnoDB for a long time, I decided to remove some of the redundant indexes from a few tables and then re-validate the queries with fewer index options, in terms of optimization. To remove indexes, I had the option of a simple ALTER TABLE … DROP INDEX or of using pt-online-schema-change; the latter makes it possible to keep the table available while the indexes are being removed. This would not be a trivial operation on MySQL versions prior to 5.5 (or 5.1 + InnoDB Plugin), especially considering that the first table targeted by the ALTER TABLE used to reside on a MySQL 5.0 server (a traumatic period) and is one of the biggest tables in the schema – 784GB. The table now resides on MySQL 5.5, but the MySQL 5.0 trauma remains in the team members' minds.

The whole operation went very well in terms of the ALTER TABLE execution; it was fast and painless. Often, folks on the customer side want to be comfortable with the solution about to be applied, to avoid losing sleep at night or on weekends; I like that as well, due to the audit process it implies. By the way, the ALTER TABLE that dropped the indexes was executed on the MASTER server and replicated to 10 slave servers, and everything ran well. Avoid modifying tables directly on slaves: at least on 5.5, I hit a problem that was published at bugs.mysql.com, and you can check it here (http://bugs.mysql.com/bug.php?id=60784).

With all those comments made, the intention of this post is to demo how important and how much faster the CREATE or DROP of a secondary index is on InnoDB tables, comparing versions 5.0, 5.1, 5.5 and 5.6; as I am planning to migrate all my customers to 5.6, I will soon provide more tests considering that version's timings as well.

Test Scenario

The first thing to do in this benchmark of index creation and removal is to set up a table with plenty of data and some complicated columns, using complex data types such as large VARCHAR, TEXT and BLOB. That gives us a suitably heavy scenario for dealing with indexes on new and old MySQL versions. I would like to call your attention to the fact that, to add or remove secondary indexes on InnoDB tables from 5.1 + InnoDB Plugin/5.5 onwards, a copy-alter-rename of the table is not needed thanks to Fast Index Creation, the opposite of what happens when a clustered index column is altered. This is the focus of the tests here, and versions 5.0, 5.1, 5.5 and 5.6 will all take part in this small benchmark. I raised a Vagrant VM with a script automating the setup of all the MySQL versions, as exhibited below:

[root@mysql56 ~]# find / -name mysqld
/mysql50/bin/mysqld
/mysql56/bin/mysqld
/mysql51/bin/mysqld
/mysql55/bin/mysqld

[root@mysql56 ~]# ls -lh / | grep mysql
drwxr-xr-x  15 mysql   mysql   4.0K May 31 01:12 mysql50
drwxr-xr-x  13 mysql   mysql   4.0K May 31 00:35 mysql51
drwxr-xr-x  13 mysql   mysql   4.0K May 31 01:15 mysql55
drwxr-xr-x  13 mysql   mysql   4.0K May 31 00:16 mysql56

[root@mysql56 bin]# /etc/init.d/mysql50 status
MySQL is not running                                       [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql51 status
MySQL is not running, but PID file exists                  [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql55 status
MySQL is not running, but PID file exists                  [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql56 status
MySQL is not running, but PID file exists                  [FAILED]

[root@mysql56 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      7.7G  7.3G     0 100% /
/dev/sda1              99M   20M   75M  21% /boot
tmpfs                 502M     0  502M   0% /dev/shm
/dev/sdb1             7.9G  147M  7.4G   2% /mysql50/datadir
/dev/sdc1             7.9G  147M  7.4G   2% /mysql51/datadir
/dev/sdd1             7.9G  147M  7.4G   2% /mysql55/datadir
/dev/sde1             7.9G  147M  7.4G   2% /mysql56/datadir

1. Create a complex test table and a stored procedure to populate the table:

I'm not sure this table is complex enough compared to the large tables we find in companies' databases. Generally, when tables are targets of INSERTs, DELETEs and UPDATEs, some other factors need to be considered, the main ones being data fragmentation, due to random access to InnoDB pages, and a loss of accuracy in table statistics – both aspects can be addressed with OPTIMIZE TABLE. Either way, the times should be close to the ones presented in these tests. Let's create the table and the procedure that will load data into it. (One side note on the procedure below: in DATE_FORMAT, '%i' is the minute specifier; the '%m' used there prints the month a second time, as can be seen in the sample output, which is harmless for this test.)

#
#: creating the database and a table to 
#: accommodate data for the tests
#
mysql> CREATE DATABASE wb;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE TABLE wb.tbl01 (
    -> a bigint not null auto_increment primary key,
    -> b varchar(144) not null,
    -> c char(144) not null,
    -> d longblob,
    -> e longtext
    -> ) ENGINE=InnoDB;
Query OK, 0 rows affected (1.68 sec)
#
#: stored procedure to load data in the table
#
mysql> DELIMITER //
mysql> CREATE PROCEDURE wb.proc01(IN p_num BIGINT)
 -> BEGIN
 -> SET @u_var = 0;
 -> WHILE @u_var <= p_num DO
 -> INSERT INTO wb.tbl01
 -> SET a=@u_var,
 -> b=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),1),
 -> c=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),1),
 -> d=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),2),
 -> e=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),2);
 -> SET @u_var = @u_var+1;
 -> END WHILE;
 -> END //
Query OK, 0 rows affected (0.00 sec)

#
#: this is the resultant data after running the procedure above
#
mysql> select * from wb.tbl01 limit 10\G
*************************** 1. row ***************************
a: 1
b: 23072014070734c4ca4238a0b923820dcc509a6f75849b
c: 23072014070734c4ca4238a0b923820dcc509a6f75849b
d: 23072014070734c4ca4238a0b923820dcc509a6f75849b23072014070734c4ca4238a0b923820dcc509a6f75849b
e: 23072014070734c4ca4238a0b923820dcc509a6f75849b23072014070734c4ca4238a0b923820dcc509a6f75849b
*************************** 2. row ***************************
a: 2
b: 23072014070734c81e728d9d4c2f636f067f89cc14862c
c: 23072014070734c81e728d9d4c2f636f067f89cc14862c
d: 23072014070734c81e728d9d4c2f636f067f89cc14862c23072014070734c81e728d9d4c2f636f067f89cc14862c
e: 23072014070734c81e728d9d4c2f636f067f89cc14862c23072014070734c81e728d9d4c2f636f067f89cc14862c
*************************** 3. row ***************************
a: 3
b: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
c: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
d: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf323072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
e: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf323072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
*************************** 4. row ***************************
a: 4
b: 23072014070734a87ff679a2f3e71d9181a67b7542122c
c: 23072014070734a87ff679a2f3e71d9181a67b7542122c
d: 23072014070734a87ff679a2f3e71d9181a67b7542122c23072014070734a87ff679a2f3e71d9181a67b7542122c
e: 23072014070734a87ff679a2f3e71d9181a67b7542122c23072014070734a87ff679a2f3e71d9181a67b7542122c
*************************** 5. row ***************************
a: 5
b: 23072014070734e4da3b7fbbce2345d7772b0674a318d5
c: 23072014070734e4da3b7fbbce2345d7772b0674a318d5
d: 23072014070734e4da3b7fbbce2345d7772b0674a318d523072014070734e4da3b7fbbce2345d7772b0674a318d5
e: 23072014070734e4da3b7fbbce2345d7772b0674a318d523072014070734e4da3b7fbbce2345d7772b0674a318d5
*************************** 6. row ***************************
a: 6
b: 230720140707341679091c5a880faf6fb5e6087eb1b2dc
c: 230720140707341679091c5a880faf6fb5e6087eb1b2dc
d: 230720140707341679091c5a880faf6fb5e6087eb1b2dc230720140707341679091c5a880faf6fb5e6087eb1b2dc
e: 230720140707341679091c5a880faf6fb5e6087eb1b2dc230720140707341679091c5a880faf6fb5e6087eb1b2dc
*************************** 7. row ***************************
a: 7
b: 230720140707348f14e45fceea167a5a36dedd4bea2543
c: 230720140707348f14e45fceea167a5a36dedd4bea2543
d: 230720140707348f14e45fceea167a5a36dedd4bea2543230720140707348f14e45fceea167a5a36dedd4bea2543
e: 230720140707348f14e45fceea167a5a36dedd4bea2543230720140707348f14e45fceea167a5a36dedd4bea2543
*************************** 8. row ***************************
a: 8
b: 23072014070734c9f0f895fb98ab9159f51fd0297e236d
c: 23072014070734c9f0f895fb98ab9159f51fd0297e236d
d: 23072014070734c9f0f895fb98ab9159f51fd0297e236d23072014070734c9f0f895fb98ab9159f51fd0297e236d
e: 23072014070734c9f0f895fb98ab9159f51fd0297e236d23072014070734c9f0f895fb98ab9159f51fd0297e236d
*************************** 9. row ***************************
a: 9
b: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
c: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
d: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad262307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
e: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad262307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
*************************** 10. row ***************************
a: 10
b: 23072014070734d3d9446802a44259755d38e6d163e820
c: 23072014070734d3d9446802a44259755d38e6d163e820
d: 23072014070734d3d9446802a44259755d38e6d163e82023072014070734d3d9446802a44259755d38e6d163e820
e: 23072014070734d3d9446802a44259755d38e6d163e82023072014070734d3d9446802a44259755d38e6d163e820
10 rows in set (0.00 sec)

After setting up the database, the table and the stored procedure, call the procedure to load data into the table we'll use to benchmark fast index creation and removal across MySQL versions.

mysql> call wb.proc01(1000000);
Query OK, 0 rows affected (7 min 31.18 sec)

mysql> select count(*) from wb.tbl01;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (1.72 sec)

2. Create some secondary indexes:

Let's use column c as the column to be indexed, creating an index called i.

##############################################
#
#: creating a secondary index on MySQL 5.0
#: Server version: 5.0.91 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 1000000 rows affected (7 min 33.84 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.0
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 1000000 rows affected (5 min 8.14 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

Just registering here that when I started the same procedure to create objects on MySQL 5.1, I found this error message when calling the procedure:

ERROR 1436 (HY000): Thread stack overrun:  8264 bytes used of a 131072 byte stack, and 128000 bytes needed.  Use 'mysqld -O thread_stack=#' to specify a bigger stack.

I adjusted the thread_stack server variable to 192K – enough for the 128000 bytes the error message asks for – and restarted mysqld.
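For reference, that is a one-line change in the standard [mysqld] group of my.cnf:

[mysqld]
thread_stack=192K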

##############################################
#
#: creating a secondary index on MySQL 5.1.70
#: Server version: 5.1.70 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 1000000 rows affected (7 min 10.73 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.1.70
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 1000000 rows affected (5 min 12.24 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
##############################################
#
#: creating a secondary index on MySQL 5.5.33
#: Server version: 5.5.33 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 0 rows affected (1 min 21.68 sec)
Records: 0  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.5.33
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 0 rows affected (0.46 sec)
Records: 0  Duplicates: 0  Warnings: 0

###############################################
#: creating a secondary index on MySQL 5.6.17
#: Server version: 5.6.17 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 0 rows affected (1 min 39.08 sec)
Records: 0  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.6.17
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 0 rows affected (0.42 sec)
Records: 0  Duplicates: 0  Warnings: 0

The tests' conclusion is that newer versions have improved over time and that Fast Index Creation really matters when one is dealing with secondary indexes. It reinforces the logic/strategy behind InnoDB, which is built around primary key lookups; thus you should have a PK on all tables – if you don't declare one, InnoDB will use the first UNIQUE index whose columns are all NOT NULL as the clustered index, or will internally create a hidden row-ID column. Secondary indexes can be changed at any time with a much faster response from MySQL, making them easier to add and remove in many cases (imagine you're working to redesign the database model in terms of indexes).

Another point worth exposing here is that the on-disk data is smaller on newer versions. 5.0 and 5.1 behaved the same regarding data size on disk but, on 5.5 and 5.6, the same amount of data resulted in a different total size:

[root@mysql56 mysql56]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              99M   20M   75M  21% /boot
tmpfs                 502M     0  502M   0% /dev/shm
/dev/sdb1             7.9G  1.3G  6.3G  17% /mysql50/datadir
/dev/sdc1             7.9G  1.3G  6.3G  17% /mysql51/datadir
/dev/sdd1             7.9G  744M  6.8G  10% /mysql55/datadir
/dev/sde1             7.9G  874M  6.7G  12% /mysql56/datadir

Index Creation

[chart: index creation times across the MySQL versions tested]

Dropping Index

[chart: index dropping times across the MySQL versions tested]


MySQL 5.5.X – Sort aborted

dezembro 26th, 2013 Bianchi Posted in MySQL A&D, MySQL Manutenção, MySQL Tuning No Comments »

This morning I started investigating a filesort problem happening on a report server. Actually, what caught my attention the most was what is really behind the error message appearing many times along this MySQL report server's error log. This particular server is a slave used just to extract reports on business data and, because of that, this kind of server is generally tuned for good response to read queries, which use aggregations such as COUNT(), SUM() and AVG() and consequently group data by some special column. In short, the data should live more in memory than on disk, and all that story.

But what is behind the message “[Warning] Sort aborted, host:”? Researching similar cases on the internet, I found that the problems MySQL reports in the error log revolve around these possibilities:

Insufficient disk space in tmpdir prevented tmpfile from being created

This one is the easiest to check: df -h /tmp will tell you how much space is available in the temporary directory at that moment. A good question here is: what do I look into when there is enough space in the /tmp dir? That is the time to take the query causing the issue, re-execute it, and monitor the /tmp dir to check whether it fills up.
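A minimal way to keep an eye on it while the query runs again, using plain coreutils (the one-second interval is arbitrary):

$ watch -n 1 'df -h /tmp'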

Somebody ran KILL in the middle of a filesort

At this point, I agree with Suresh Kuna when he said that “as a DBA, we can’t do much with the first point apart from informing the customer to check the application side for connection drop-outs”. The query can be stopped by an error reading packets, a transaction timeout or even a replication slave timeout. Many variables get involved when analysing this kind of problem but, mainly, it concerns a user who gives up on the report's query in the middle of processing.

The server was shutdown while some queries were sorting

When this error is reported, you can use the timestamp associated with the entry and then go through the details of the MySQL Server shutdown by reading the surrounding error log entries.

A transaction got rolled back or aborted due to lock wait timeout or deadlock

At this point we can consider many things, but the main one is checking SHOW ENGINE INNODB STATUS, which reports the latest detected deadlock if you're using InnoDB to handle the tables. The error log may also report deadlocks involving local transactions or, if the server acts as a slave, deadlocks involving a replicated transaction – innodb_lock_wait_timeout and slave_net_timeout can help with this. Another variable that can be used is slave_transaction_retries, which sets how many times the replication SQL thread retries a transaction that failed because of an InnoDB deadlock or because its execution time exceeded innodb_lock_wait_timeout.

Unexpected errors, such as a corrupted source table or tmp table

In this case, depending on the size of the involved table (sometimes you won't be able to tell which table is the target just by reading the error log), a simple CHECK TABLE can be effective in finding out whether the table has corrupted pages or other errors.

Processing of a subquery failed which was also sorting

This is the classic case most of the time. The good news is that when a subquery fails while sorting, it's a good opportunity to review the value configured for sort_buffer_size. TAKE CARE: do not increase it without checking the reason and the Sort_% status variables, to work out the best value for the server's file-sorting requirements.

Considering what the online MySQL manual explains, only increase the sort_buffer_size value when Sort_merge_passes is greater than zero.
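Checking the counter is a one-liner; it is global and cumulative since startup:

mysql> SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';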

Sheeri Cabral wrote about that: http://www.pythian.com/blog/sort_buffer_size-and-knowing-why/


MySQL and skip-name-resolve

novembro 28th, 2013 Bianchi Posted in MySQL Tuning No Comments »

Since the release of version 5.5 of the MySQL database server, I have seen many problems related to the name-resolution variable, skip-name-resolve. For those who don't know it yet, or are just starting out with MySQL: every time the database server receives a connection, for example one coming from the mysql client, the host the connection comes from is part of the user's authentication check. Besides the user name and password, the user must be allowed to connect from that particular host, as configured either through explicit user creation with the CREATE USER command or, depending on the SQL_MODE settings, through the GRANT command, which also lets you set privileges, host and password for the user.

Back to the moment of connection: given that the host is also verified, version 5.5 introduced a new feature that keeps hosts in an in-memory cache. Moreover, since MySQL checks the host tied to a connection against the host column of the mysql.user table, when a host is not found there, MySQL tries to resolve it through a DNS lookup: it resolves the IP into a host name, keeps using the IP, but stores the host name in the cache. In that IP-to-name resolution there is an additional check: verifying that the IP reaching MySQL is the same IP configured behind the machine name in the DNS. It sounds very good but, if your company does not use a DNS, or only has IP addresses in the host column of the mysql.user table, it may not be worth generating that overhead for the server, plus a bit of a headache since, depending on the kind of monitoring you have internally, a simple error-log line about this or that IP can fire an unnecessary ticket in the middle of the night – believe me, it happens!!

There are advantages and disadvantages: if a user needs to connect from host BOX01 and one of the requirements is to create a user “foo”@”box01”, fine, name resolution is worth having. Another quite interesting point is configuring MySQL so that, if a given user tries to connect x times and fails to log in because of typos or a forgotten password, the host can be blocked (you never know whether it is a person or a robot trying to get access). That can be done through the max_connect_errors variable: added to the configuration file with a value of 3, for example, it gives 3 login attempts. To unblock blocked hosts, FLUSH HOSTS.

With the option enabled, MySQL, besides doing that IP verification (checking the IP is really itself!!), also uses a memory structure to cache hosts on their first valid access, keeping them in memory until the space for this host list runs out. At that point, the LRU (Least Recently Used) algorithm kicks in and the least-accessed host is removed from memory (a process known as eviction). This whole process also involves structures such as mutexes, threads and locks.

Now, if the MySQL users can all be created using the IP the connection comes from, or the localhost string, we can disable name resolution with the –skip-name-resolve variable: add it to the [mysqld] section of the MySQL configuration file and restart mysqld.

[mysqld]
max_connect_errors=3 # three authentication attempts
#skip-name-resolve   # disables DNS lookups; line commented out

It is worth highlighting that if a value other than an IP or the localhost string is found in the host column of the privilege tables (user, db, host, tables_priv, columns_priv and procs_priv), it is not advisable to disable name resolution. Conversely, if there are only IPs and the localhost string, –skip-name-resolve can be enabled. Use the query below to check which values exist in the host column of MySQL's privilege tables (also known as grant tables):

mysql> select a.host as `mysql.user`, b.host as `mysql.db`, c.host as `mysql.tables_priv`, d.host as `mysql.columns_pric` FROM mysql.user as a left join mysql.db as b on a.user=b.user left join mysql.tables_priv as c on a.user=c.user left join mysql.columns_priv as d on a.user=d.user;
+-----------------------+----------+-------------------+--------------------+
| mysql.user            | mysql.db | mysql.tables_priv | mysql.columns_pric |
+-----------------------+----------+-------------------+--------------------+
| localhost             | %        | NULL              | NULL               |
| localhost.localdomain | %        | NULL              | NULL               |
| localhost             | %        | NULL              | NULL               |
| localhost.localdomain | %        | NULL              | NULL               |
| 127.0.0.1             | NULL     | NULL              | NULL               |
| ::1                   | NULL     | NULL              | NULL               |
| localhost             | NULL     | NULL              | NULL               |
| localhost             | NULL     | NULL              | NULL               |
| localhost.localdomain | NULL     | NULL              | NULL               |
+-----------------------+----------+-------------------+--------------------+
9 rows in set (0.01 sec)

In the result of the query above, note the many NULL values in the tables further to the right. This shows there are no users whose privileges are restricted to specific databases, to specific tables, or to columns of specific tables.

When the feature is enabled and MySQL cannot do the reverse lookup for a connection, a Warning event is added to the error log – check the log_error variable to find out where the log file was created – stating that it was not possible to resolve a given IP/DNS for a connection. The entry added to MySQL's error log file looks like the line below:

[Warning] IP address '#.#.#.#' could not be resolved: Name or service not known

It is worth knowing exactly what each event added to the MySQL error log represents, so your system keeps running without downtime and you get the chance to be more proactive about problems that MySQL and its databases may present in the future.

This was a short post, more theory than practice, but the good news is that I'll try to be back soon!!

Happy MySQL’ing!!


InnoDB Status Output – Buffer Pool and Spin Rounds

outubro 19th, 2013 Bianchi Posted in MySQL Tuning No Comments »

InnoDB has a good source of information about its status, which can be requested any time you need to know “what's up” with it in your environment. SHOW ENGINE INNODB STATUS reports on the last x seconds of operation, giving the system or database administrator the best possible picture of what is happening with the data pages being manipulated, in the sense of keeping them in the Buffer Pool as long as possible.

$ mysql -u <user> -p -e 'SHOW ENGINE INNODB STATUS\G' > file

The Buffer Pool is the privileged main-memory area where InnoDB keeps the most recently used data pages of InnoDB tables and indexes, rotating them based on the LRU algorithm. This area serves SELECT, UPDATE and DELETE well, as those SQL commands will read far more data from memory than from disk. Pages cycle between young and old status: more recently used and less recently used, respectively…

----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 79121448960; in additional pool allocated 0
Dictionary memory allocated 776119
Buffer pool size   4718590
Free buffers       4682063
Database pages     36395
Old database pages 13627
Modified db pages  23223
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 672, not young 0
2.90 youngs/s, 0.00 non-youngs/s
Pages read 36066, created 329, written 323
75.09 reads/s, 1.50 creates/s, 0.00 writes/s
Buffer pool hit rate 985 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 5.00/s
LRU len: 36395, unzip_LRU len: 0
I/O sum[0]:cur[80], unzip sum[0]:cur[0]

As you can see above, the total main memory allocated is 79121448960 bytes, part of it holding InnoDB's dictionary. The buffer pool itself is 4,718,590 pages of 16KB each (roughly 72GB; the total above also covers InnoDB's internal structures), of which 4,682,063 pages are still free. 36,395 pages currently hold data: 13,627 of them sit in the old sublist and 23,223 are modified, or dirty, pages – those changed by an UPDATE, for example, and not yet flushed to disk. The pending reads and writes counters show pages waiting on I/O, the writes broken down into LRU, flush list and single-page flushes.

A good point that caught my attention in the output above is the read-ahead and eviction figures. “The read ahead request is an I/O request to prefetch multiple pages in the buffer pool asynchronously, in anticipation that these pages will be needed soon”. The evicted-without-access counter tells us how many of those prefetched pages were evicted without ever being accessed. Bringing more pages than necessary into the buffer pool has a cost, as the mechanism must discard pages that end up never being accessed, even though the prefetching itself is asynchronous.

Recently, I got very curious about spin-round behavior, and I realized that if you have many transactions in a sleep state inside InnoDB, waiting to be executed, it may well be a spin-round problem. The output of SHOW ENGINE INNODB STATUS will show you that…

----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 13701
--Thread 140549419812608 has waited at log0log.ic line 321 for 0.00 seconds the semaphore:
Mutex at 0x7c10f4b8 created file log0log.cc line 737, lock var 1
waiters flag 1
OS WAIT ARRAY INFO: signal count 15206
Mutex spin waits 607605, rounds 3114855, OS waits 8383
RW-shared spins 9396, rounds 101453, OS waits 1626
RW-excl spins 6569, rounds 137971, OS waits 3191
Spin rounds per wait: 5.13 mutex, 10.80 RW-shared, 21.00 RW-excl

So, what does all this mean?

• Mutex spin waits 607605 is the number of times a thread tried to get a mutex and it wasn’t available, so it waited in a spin-wait;
• rounds 3114855 is the number of times threads looped in the spin-wait cycle, checking the mutex.
• OS waits 8383 is the number of times the thread gave up spin-waiting and went to sleep state instead.

In the SEMAPHORES output above we can see a case where fine tuning is needed to avoid context switches, since it costs a lot of computational resources to save the current execution state and restore it as soon as possible. Note how the summary line is derived: 3,114,855 rounds / 607,605 spin waits ≈ 5.13 spin rounds per mutex wait. The RW-shared numbers are high, but they are not the real problem; the real problem is around RW-excl, which takes exclusive latches and drives the rounds up, even at the OS level, ending at an average of 21 spin rounds per RW-excl wait.

I will comment more about it soon, cheers!


MySQL 5.6 Thread Pool

setembro 30th, 2013 Bianchi Posted in MySQL A&D, MySQL Tuning No Comments »

Given the problem already discussed here on this blog, regarding the scaling of user connections versus thread creation in MySQL versus the operating system – in that case, a CentOS 6.0 – I recently decided to stop and read the MySQL manual, checking in detail what the feature promises. I had already run some tests a while ago, but it never hurts to revisit.

The Thread Pool, a plugin included in the Enterprise edition of MySQL offered by Oracle, was introduced to increase scalability as the number of users grows. As much as I believe it is better to resolve queries quickly, with good performance, than to pile up users on the database management system and thus burn much more processing on thread creation, we still have to account for long-running statements that can take a large share of the database host's resources.

The intention of the plugin is to make MySQL scale better as more connections are made. According to the manual, the more connections, the more stable and faster the engine's (mysqld) response. Before getting to the interesting part, some points of attention:

  • The Thread Pool is not enabled by default; you need to configure the plugin to be loaded (see the sketch after the next paragraph);
  • Its variables are only available when the plugin itself is loaded;

After enabling the plugin, check its variables and understand what each of them does.
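A minimal my.cnf sketch for loading it, assuming a Linux build where the plugin library is named thread_pool.so:

[mysqld]
plugin-load-add=thread_pool.so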

Learn more in the manual. I won't go into the details because my goal here is to show that the feature really delivers good results, so below are the results of a small benchmark with mysqlslap…

[root@threadpool ~]# mysqlslap --user=root --password=123456 --auto-generate-sql --concurrency=100,150,200,250,300 --number-of-queries=2000
Warning: Using a password on the command line interface can be insecure.
Benchmark
Average number of seconds to run all queries: 2.675 seconds
Minimum number of seconds to run all queries: 2.675 seconds
Maximum number of seconds to run all queries: 2.675 seconds
Number of clients running queries: 100
Average number of queries per client: 20

Benchmark
Average number of seconds to run all queries: 2.224 seconds
Minimum number of seconds to run all queries: 2.224 seconds
Maximum number of seconds to run all queries: 2.224 seconds
Number of clients running queries: 150
Average number of queries per client: 13

Benchmark
Average number of seconds to run all queries: 2.363 seconds
Minimum number of seconds to run all queries: 2.363 seconds
Maximum number of seconds to run all queries: 2.363 seconds
Number of clients running queries: 200
Average number of queries per client: 10

Benchmark
Average number of seconds to run all queries: 2.035 seconds
Minimum number of seconds to run all queries: 2.035 seconds
Maximum number of seconds to run all queries: 2.035 seconds
Number of clients running queries: 250
Average number of queries per client: 8

Benchmark
Average number of seconds to run all queries: 1.984 seconds
Minimum number of seconds to run all queries: 1.984 seconds
Maximum number of seconds to run all queries: 1.984 seconds
Number of clients running queries: 300
Average number of queries per client: 6


The next step is to check the number of stalled queries through the INFORMATION_SCHEMA.TP_THREAD_GROUP_STATS table, which is only available when the server is running the Thread Pool plugin.

mysql> call test.stalledThreads;
+-------------------------------------------------------+
| SUM(STALLED_QUERIES_EXECUTED) / SUM(QUERIES_EXECUTED) |
+-------------------------------------------------------+
| 0.0000                                                |
+-------------------------------------------------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
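The stalledThreads procedure called above is a small custom helper; judging by the column header in its output, it presumably wraps a query like this sketch:

mysql> SELECT SUM(STALLED_QUERIES_EXECUTED) / SUM(QUERIES_EXECUTED)
    ->   FROM INFORMATION_SCHEMA.TP_THREAD_GROUP_STATS;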

No stalled queries. Soon I'll post the Thread Pool in action. See you!


Stress-testing MySQL with mysqlslap

junho 15th, 2012 Bianchi Posted in MySQL A&D, MySQL Manutenção, MySQL Tuning 2 Comments »

It's nothing new that a server needs several rounds of testing before going to production and, for that, the tests the development team designs are sometimes not the best ones from the database administrator's point of view. In truth, the two teams need to work together, aligned on this task, so that nothing escapes the eyes and understanding of either side, application or database; stress tests, or benchmarks, are a decisive factor in electing a product as a solution or not.

This week we had an interesting interaction with a customer in Brazil who needed to be sure that a MySQL database server could go into production to handle heavy demand, and so they called us in to review the whole configuration, correct performance metrics and check disks, memory and processing power. After that work, we used mysqlslap, MySQL's native benchmark suite, shipped along with several client and non-client programs when the world's most popular database server is installed. Many options can be used with mysqlslap which, on first use, creates a database to manage its own metadata.

What I want to show here is that, after auditing and properly tuning the customer's MySQL instance, which runs on Red Hat 6, we ran some custom scripts, created by WBConsultinga, to optimize data pages and update object statistics, and then started the tests with mysqlslap: first, to verify whether we would have problems with the 3000 simultaneous user connections the customer required for the system.

We then executed the first test with 3000 clients firing 1000 queries over simultaneous connections…

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=3000 --number-of-queries=1000
Benchmark
Average number of seconds to run all queries: 33.098 seconds
Minimum number of seconds to run all queries: 33.098 seconds
Maximum number of seconds to run all queries: 33.098 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

The time of a single iteration can look high if we consider ad hoc queries. For that, mysqlslap has an option that lets you control how many times you want to repeat the same iteration (-i or –iterations). We executed with -i 5 and noticed that the cache and buffer adjustments are working well…

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=3000 --auto-generate-sql-write-number=100 -i 5
Benchmark
Average number of seconds to run all queries: 19.387 seconds
Minimum number of seconds to run all queries: 17.967 seconds
Maximum number of seconds to run all queries: 22.998 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

We then got lower average and minimum times than when executing ad hoc queries. Consulting MySQL's status variables, we noticed that a lot of data had been loaded into the memory structures, both the InnoDB Buffer Pool and the MyISAM Key Buffer.

mysql> show status like 'Innodb_buffer_pool%';
+---------------------------------------+-----------+
| Variable_name                         | Value     |
+---------------------------------------+-----------+
| Innodb_buffer_pool_pages_data         | 5638      |
| Innodb_buffer_pool_pages_dirty        | 0         |
| Innodb_buffer_pool_pages_flushed      | 13895     |
| Innodb_buffer_pool_pages_free         | 518648    |
| Innodb_buffer_pool_pages_misc         | 1         |
| Innodb_buffer_pool_pages_total        | 524287    |
| Innodb_buffer_pool_read_ahead_rnd     | 0         |
| Innodb_buffer_pool_read_ahead         | 0         |
| Innodb_buffer_pool_read_ahead_evicted | 0         |
| Innodb_buffer_pool_read_requests      | 764868549 |
| Innodb_buffer_pool_reads              | 1865      |
| Innodb_buffer_pool_wait_free          | 0         |
| Innodb_buffer_pool_write_requests     | 665820    |
+---------------------------------------+-----------+
13 rows in set (0.01 sec)

mysql> show status like 'Key_%';
+------------------------+---------+
| Variable_name          | Value   |
+------------------------+---------+
| Key_blocks_not_flushed | 1023    |
| Key_blocks_unused      | 17      |
| Key_blocks_used        | 2514736 |
| Key_read_requests      | 0       |
| Key_reads              | 2876589 |
| Key_write_requests     | 4566867 |
| Key_writes             | 4567890 |
+------------------------+---------+
7 rows in set (0.00 sec)

Finally, a test scaling up simultaneous connections, starting at 500 and going to 1000, 1500 and finally 3000:

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=500,1000,1500,3000 --number-of-queries=100
Benchmark
Average number of seconds to run all queries: 3.084 seconds
Minimum number of seconds to run all queries: 3.084 seconds
Maximum number of seconds to run all queries: 3.084 seconds
Number of clients running queries: 500
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 4.054 seconds
Minimum number of seconds to run all queries: 4.054 seconds
Maximum number of seconds to run all queries: 4.054 seconds
Number of clients running queries: 1000
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 6.993 seconds
Minimum number of seconds to run all queries: 6.993 seconds
Maximum number of seconds to run all queries: 6.993 seconds
Number of clients running queries: 1500
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 16.021 seconds
Minimum number of seconds to run all queries: 37.092 seconds
Maximum number of seconds to run all queries: 22.008 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

The summary of resource utilization was:

Peak CPU at the end of the tests: 49%
Peak I/O rate: 42%
Peak memory utilization: 70%
Peak swap usage: 0%

We managed to reach the number of simultaneous connections the customer needed, adjusting the @@max_connections and @@max_user_connections variables according to what was required. mysqlslap helped us push MySQL to the limits the project demanded and prove that the database server was ready to go into production.


Checking the size of indexes and data!

junho 13th, 2012 Bianchi Posted in MySQL Manutenção, MySQL Tuning No Comments »

Many are the daily (and also nightly) tasks a DBA must perform to keep the database server in perfect working order, accessible by the clients and applications that consume data at today's frantic pace. One point that always deserves great attention is the read and write response performance a database server can deliver. MySQL is a very flexible DBMS, completely customizable and tunable, with a wide range of resources available in the pursuit of better performance.

When working with a database whose tables are handled by MyISAM, the default Storage Engine up to version 5.1, we can easily keep index data in memory as long as possible, tuning MySQL to store an amount X of that index data in the key buffer, the memory area sized by the key_buffer_size variable. The more of this data is in memory, the fewer disk seeks, the less overhead, the less processing.

To view the size of a database's indexes, we query the TABLES table of MySQL's data dictionary, INFORMATION_SCHEMA – note that most of the tables composing MySQL's data dictionary are handled by the MEMORY engine, with the exception of some handled by MyISAM. The following query returns the total size of the indexes, stored in the “.MYI” files, and the total size of the data, stored in the “.MYD” files:

[screenshot: index and data size query using INFORMATION_SCHEMA]
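Since the screenshot did not survive, here is a sketch of a query in that spirit; the columns come straight from INFORMATION_SCHEMA.TABLES, and the division by 1024*1024 just renders megabytes:

mysql> SELECT TABLE_SCHEMA,
    ->        SUM(INDEX_LENGTH)/1024/1024 AS index_size_mb,
    ->        SUM(DATA_LENGTH)/1024/1024  AS data_size_mb
    ->   FROM INFORMATION_SCHEMA.TABLES
    ->  WHERE ENGINE = 'MyISAM'
    ->  GROUP BY TABLE_SCHEMA;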

As this is a test and my MySQL instance has no database whose information I can reveal, the index and data sizes come back zeroed but, when you run this query against your production database, you will retrieve non-zero figures. From that, we get the amount of memory needed to start working on MyISAM performance metrics based on key_buffer_size. Depending on the size of your hardware, working on other parts of MySQL will be more than necessary to really give the database server response power and keep the buffer exchanges efficient.

One extra tip beyond those already given: keep your databases' statistics as up to date as possible with ANALYZE and/or OPTIMIZE.

Questions? I await your comment.


Speeding up data loads and restores in MySQL

maio 13th, 2012 Bianchi Posted in MySQL A&D, MySQL Backup, MySQL Manutenção, MySQL Tuning No Comments »

Many friends write asking how to speed up a data load or a backup restore in MySQL. Indeed, depending on the size of your hardware, the configuration of the Storage Engines and per-client variables, and the design of your database, this process can take many hours if some precautions are not taken before starting. A short while ago, working on a consulting engagement where the customer needed a daily load of all account movements consolidated in text files, we wrapped up our work after developing an application that, besides loading the data with several treatments around the LOAD DATA INFILE statements, reconfigured several MySQL settings at the application's runtime so the process was truly sped up.

There are several points to observe:

  • KEY, UNIQUE and FULLTEXT indexes, for MyISAM tables;
  • Foreign keys, for InnoDB tables;
  • The AUTOCOMMIT mode, for InnoDB tables.

For the tests in this post, we will use a virtual machine running CentOS 6.0 with MySQL 5.6.

[root@master ~]# mysqladmin -u root -p123456 version
mysqladmin Ver 8.42 Distrib 5.6.4-m7, for Linux on i686


Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version 5.6.4-m7-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/lib/mysql/mysql.sock
Uptime: 42 min 17 sec

To disable KEY and UNIQUE indexes, just write a SELECT, or even a script, that walks table by table through your physical database model and disables the indexes of each one. I like doing this via the mysql client with the -B (uppercase) option, which runs the connection to mysqld in batch mode. If you feel safer using MySQL's facilities together with the file system, you can use SELECT … INTO OUTFILE.

# we create tables with KEY indexes, i.e., structures used to improve data-retrieval performance
[root@master ~]# for i in {1..5}; do mysql -u root -p123456 test -e "create table tb$i(id$i int,key(id$i)) engine=myisam;"; done

# we list the tables just created
[root@master ~]# mysql -u root -p123456 -e "show tables from test like 'tb%'"
+----------------------+
| Tables_in_test (tb%) |
+----------------------+
| tb1                  |
| tb2                  |
| tb3                  |
| tb4                  |
| tb5                  |
+----------------------+

# we list the indexes created on the id columns of the tables we just created
[root@master ~]# mysql -u root -p123456 -e "select column_name, column_key from information_schema.columns where table_schema='test' and table_name like 'tb%'"
+-------------+------------+
| column_name | column_key |
+-------------+------------+
| id1         | MUL        |
| id2         | MUL        |
| id3         | MUL        |
| id4         | MUL        |
| id5         | MUL        |
+-------------+------------+

Now that we have indexes to disable, we can run a SELECT that returns the ALTER TABLE commands needed to disable the indexes of the tables in the database targeted by the load.

# running in batch mode
[root@master ~]# mysql -u root -p123456 -B -e "select concat('alter table ',table_name,' disable keys;') from information_schema.tables where table_schema='test'"
concat('alter table ',table_name,' disable keys;')
alter table t1 disable keys;
alter table t2 disable keys;
alter table t3 disable keys;
alter table tb1 disable keys;
alter table tb2 disable keys;
alter table tb3 disable keys;
alter table tb4 disable keys;
alter table tb5 disable keys;

# running with SELECT ... INTO OUTFILE
[root@master ~]# mysql -u root -p123456 -e "select concat('alter table ',table_name,' disable keys;') into outfile '/tmp/alterDisableKey' from information_schema.tables where table_schema='test'"
[root@master ~]#

Considering the second option, go back to mysql and execute the content of the file that was saved in /tmp.

# executing the file via source
[root@master ~]# mysql -u root -p123456 test -e "source /tmp/alterDisableKey;"

# confirming that the indexes were disabled

mysql> show index from tb1\G
*************************** 1. row ***************************
Table: tb1
Non_unique: 1
Key_name: id1
Seq_in_index: 1
Column_name: id1
Collation: A
Cardinality: NULL
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment: disabled # disabled!
Index_comment:
1 row in set (0.00 sec)

After loading the data, run ALTER TABLE <table_name> ENABLE KEYS!
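The ENABLE statements can be generated the same way, mirroring the earlier SELECT:

[root@master ~]# mysql -u root -p123456 -B -e "select concat('alter table ',table_name,' enable keys;') from information_schema.tables where table_schema='test'"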

Getting the foreign key checks disabled for InnoDB tables (the referential-integrity checking really slows down a data restore) is a smoother process than the previous one. You just need to reconfigure, in your session, the value of the foreign_key_checks variable, as we see below:

mysql> show variables like 'foreign%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| foreign_key_checks | ON    |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> SET FOREIGN_KEY_CHECKS=OFF;
Query OK, 0 rows affected (0.05 sec)

mysql> SET FOREIGN_KEY_CHECKS=0;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'foreign%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| foreign_key_checks | OFF   |
+--------------------+-------+
1 row in set (0.00 sec)

The final point to wrap up this post: AUTOCOMMIT! First, let's understand what it does and what it controls. As InnoDB is a transactional Storage Engine, for every UPDATE, INSERT or DELETE executed, InnoDB takes care of sending a COMMIT right after the query; that is, when AUTOCOMMIT is set to 1 or ON, which is the default. Since we want to run many operations and only issue an explicit COMMIT at the end, which is what mysqldump does with a backup file generated with the -e option, we need to set AUTOCOMMIT to OFF or 0.

# configuring autocommit in the MySQL configuration file; save it and restart MySQL
[root@master ~]# vim /etc/my.cnf

[mysqld]
autocommit=0

[root@master ~]# service mysql restart
Shutting down MySQL ... [ OK ]
Starting MySQL      ... [ OK ]

There you go: your MySQL database server is now configured to get through restore processes faster and to take heavy data loads. As a bonus, look into how the bulk_insert_buffer_size variable works; it will also help in this area.
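For the record, its current value can be checked like any other server variable before you decide to raise it:

mysql> SHOW VARIABLES LIKE 'bulk_insert_buffer_size';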

See you.



InnoDB and the Transaction Logs

março 18th, 2012 Bianchi Posted in MySQL Tuning No Comments »

One of the most interesting challenges in MySQL today is getting a good InnoDB Plugin configuration applied to the database server, mysqld. I say "a configuration" because reaching a real performance improvement is not that easy, even when you know the meaning and the possible values of each of those variables and are aware that, if you change the number of log files, you also need to change other parameters for the work to make sense. It may look like a straightforward configuration task but, in truth, it is not.

Since its conception, the MySQL database server has used the concepts of "system variables" and "status variables": respectively, system variables are the ones that receive values so that a given behavior is shaped (innodb_flush_method=O_DIRECT, for example), while status variables are internal counters, incremented in bytes or in number of occurrences, so that decisions can be made based on actual facts.

Why did I say all this? This weekend I worked with a new client who was running MySQL 5.1.49, and we migrated to MySQL 5.5. Even after all the adjustments needed to make the new environment use the product's new features (click here and see what changed), I noticed slowness problems and went investigating. First, I relied on SMART to test the disks: 15K-rpm disks running very well. I analyzed memory behavior with htop, vmstat and atop for an hour and a half. Nothing showed up, but I managed to notice very heavy I/O whenever MySQL flushed the dirty pages from the buffer to disk.

The flush process in MySQL is quite similar to the same process in Oracle; it can happen for 4 reasons: 1) the number of dirty pages reaches its limit; 2) a checkpoint happens; 3) a COMMIT is issued; or 4) a time limit is reached, determined in MySQL by the innodb_flush_log_at_trx_commit variable.

Broadly speaking, "in broad terms", InnoDB's transaction log is Oracle's redo log, as the idea is very similar. By default, after any installation, be it on MS Windows or on any flavor of Linux/Unix, you will notice that 2 log files were created in MySQL's DATADIR: two files following the naming pattern ib_logfilex, where x is a sequential number. These files, with their sizes summed, cannot exceed or even reach 4GB (4096MB). I usually see configurations creating several files of 380MB each, for example.

[root@shaftserver01 mysql]# ls -lh | grep ib
-rw-rw----. 1 mysql mysql 1,0G Mar 18 11:34 ibdata1
-rw-rw----. 1 mysql mysql 380M Mar 18 11:34 ib_logfile0
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile1
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile2
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile3
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile4
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile5
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile6
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile7
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile8
-rw-rw----. 1 mysql mysql 380M Mar 18 11:33 ib_logfile9
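For reference, the layout above maps to a my.cnf fragment like this (a sketch matching the listing: ten files of 380M each, summing to well under the 4GB group limit):

[mysqld]
innodb_log_file_size=380M
innodb_log_files_in_group=10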

What tells you it is necessary to increase the number of log files, or the space available for the logs? A status variable that few people value, or even know exists, measures whether the log-writing process is efficient. Efficiency in this process means precisely not having to "wait" to write log records to the files; therefore, if a flush has to wait for space to be freed, it is time to add more log files and make more space available. A reminder: log records are first stored in the log buffer and then, under the conditions already mentioned, flushed to disk, written to the files in a circular fashion.

MySQL has no way of archiving logs before the files are reused. If you want to rebuild databases from the change vectors that passed through InnoDB's transaction logs, use the binary log instead; that is the one that can help you with the task of recreating the database.

Back to the problem: if the Innodb_log_waits status variable is greater than zero, consider reconfiguring the MySQL server:

mysql> show status like 'innodb_log_waits';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Innodb_log_waits | 3287  |
+------------------+-------+
1 row in set (0.00 sec)

When reconfiguring the logs, you may have problems restarting MySQL. You therefore need to remove the current transaction log files and then start MySQL again, so it recreates them with the new settings.

See you!


MySQL server has gone away

janeiro 18th, 2012 Bianchi Posted in MySQL Manutenção, MySQL Tuning 1 Comment »

This is an error message that appears on many of the MySQL database servers installed around the world, and many users find themselves in a situation that seems to have no solution. For quite some time I have been answering discussion forums on this topic, which is simple to resolve. My hope is that Google soon indexes this post's title, so people can find out how to get rid of the "MySQL server has gone away" message in the middle of data loads, database restores or the application's regular interactions with the MySQL server.

There is a system variable in MySQL that controls this behavior, namely the maximum size of the packets that may travel through MySQL's threads. Keep in mind that each thread is a connection, and you can get information about them with the SHOW PROCESSLIST command. The initial packet size is set by the net_buffer_length variable, and the maximum is set by max_allowed_packet, the variable whose value often turns out to be too small for a system that has grown large.
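
The ceiling can be raised at runtime as well as in my.cnf; a minimal sketch (the value mirrors the one used below, and only connections opened after the change see it):

SET GLOBAL max_allowed_packet = 1073741824; -- 1GB
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';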

For example, at the beginning of this year I started working with a customer in Brazil, and at that point we needed to load a large amount of XML data, a kind of log the customer stores for its own purposes. When we started loading files of roughly 300GB at a time, we ran into "MySQL server has gone away". Neither the message nor the behavior was new to me; a few adjustments in my.cnf, more precisely to the max_allowed_packet variable, solved everything:

[root@motoserver189 ~]# mysql -u root -p imoin_package < /files/log1765390.dump
ERROR 2006 (HY000) at line 59: MySQL server has gone away

# We raised max_allowed_packet so MySQL's threads accept larger packets

[mysqld]
max_allowed_packet=1024M

# We restarted the MySQL database server so the change takes effect

[root@motoserver189 ~]# service mysql restart
Starting MySQL....................................... SUCCESS!

# We tried again, and since everything should now succeed, we wrapped the restore command in nohup, which hands the process over to the operating system; if our connection to the server is closed, the restore is not affected.

[root@motoserver189 ~]# nohup mysql -u root -p imoin_package < /files/log1765390.dump &
[1] 26303
[root@bd14 mysql]# nohup: appending output to `nohup.out'

See you…


Partitioning the InnoDB Buffer Pool

December 21st, 2011 Bianchi Posted in MySQL Maintenance, MySQL Tuning No Comments »

The title of this article is quite suggestive from a database performance standpoint. Generally, regardless of the type of partitioning, horizontal or vertical, it serves to eliminate the overhead of extra work when writing and/or retrieving data. With the InnoDB Buffer Pool, as of version 5.5, it is no different: we can use a new variable, available only with the InnoDB Plugin, that lets us split the Buffer Pool (the memory area that caches data and indexes of InnoDB tables) into several instances, with each instance holding at least 1GB. In this scenario, if innodb_buffer_pool_size equals 2GB, we can set innodb_buffer_pool_instances=2.

The main advantage of a partitioned Buffer Pool is that each instance controls its own list, based on the LRU (Least Recently Used) algorithm, and holds far less data than a single instance would, which means less time spent locating a given page in memory among fewer pages.

A good analogy for understanding this: imagine you leave your car in a shopping-mall parking lot with room for 1,000 cars. If you park without a good sense of where you are, you may waste several minutes finding the car when you come back. Now imagine the same lot divided into sectors, something like A1, A2, B1, B2, and so on. When you park, you know which sector the car is in, and each sector holds at most 50 cars. You search among far fewer cars than if you had to scan the whole lot.

An example configuration might look like this:

[mysqld]
innodb_buffer_pool_size=16G
innodb_buffer_pool_instances=8
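
A quick sanity check of the per-instance size this configuration yields (values in bytes):

SELECT @@global.innodb_buffer_pool_size / @@global.innodb_buffer_pool_instances
AS bytes_per_instance;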

With this configuration we have 8 Buffer Pool instances, each with 2GB of space for InnoDB data and indexes. We can monitor what is happening in each instance with the SHOW ENGINE INNODB STATUS command, looking at the "INDIVIDUAL BUFFER POOL INFO" section:

----------------------
INDIVIDUAL BUFFER POOL INFO
----------------------
---BUFFER POOL 0
Buffer pool size 131071
Free buffers 20999
Database pages 109854
Old database pages 40564
Modified db pages 2
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 11, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 106393, created 3461, written 70472
0.00 reads/s, 0.02 creates/s, 0.80 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 109854, unzip_LRU len: 190
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 1
Buffer pool size 131071
Free buffers 20192
Database pages 110633
Old database pages 40859
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 21, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 107355, created 3278, written 50788
0.00 reads/s, 0.00 creates/s, 0.48 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 110633, unzip_LRU len: 219
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 2
Buffer pool size 131071
Free buffers 19981
Database pages 110840
Old database pages 40935
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 11, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 107052, created 3788, written 65778
0.00 reads/s, 0.00 creates/s, 0.48 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 110840, unzip_LRU len: 223
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 3
Buffer pool size 131071
Free buffers 18616
Database pages 112208
Old database pages 41440
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 17, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 108448, created 3760, written 48754
0.00 reads/s, 0.00 creates/s, 0.27 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 112208, unzip_LRU len: 220
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 4
Buffer pool size 131071
Free buffers 23980
Database pages 106849
Old database pages 39461
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 9, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 103190, created 3659, written 63331
0.00 reads/s, 0.02 creates/s, 0.70 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 106849, unzip_LRU len: 217
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 5
Buffer pool size 131071
Free buffers 19814
Database pages 111069
Old database pages 41020
Modified db pages 0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 14, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 106936, created 4133, written 85900
0.00 reads/s, 0.00 creates/s, 0.61 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 111069, unzip_LRU len: 162
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 6
Buffer pool size 131071
Free buffers 18889
Database pages 112005
Old database pages 41340
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 5, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 108175, created 3830, written 83143
0.00 reads/s, 0.00 creates/s, 0.73 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 112005, unzip_LRU len: 149
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
---BUFFER POOL 7
Buffer pool size 131071
Free buffers 19352
Database pages 111534
Old database pages 41189
Modified db pages 1
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 11, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 107999, created 3535, written 57687
0.00 reads/s, 0.00 creates/s, 0.41 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 111534, unzip_LRU len: 158
I/O sum[0]:cur[0], unzip sum[0]:cur[0]

Notice that each instance has its own LRU control: young and old pages, pages made young because they were requested often, and pages that aged out because they were rarely touched. Page counts and free-buffer counts can also be observed per instance. Interestingly, this section only appears in the SHOW ENGINE INNODB STATUS output when innodb_buffer_pool_instances is greater than 1.

Most interesting of all: for the InnoDB Buffer Pool to work well, partitioned or not, the data has to be in it. To manually preload data into the buffer, run this query and then execute the statements it generates:

SELECT
  CONCAT('SELECT ', MIN(c.COLUMN_NAME), ' FROM ', c.TABLE_NAME,
         ' WHERE ', MIN(c.COLUMN_NAME), ' IS NOT NULL')
FROM
  information_schema.COLUMNS AS c
LEFT JOIN (
  SELECT DISTINCT
    TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
  FROM
    information_schema.KEY_COLUMN_USAGE
) AS k
USING
  (TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME)
WHERE
  c.TABLE_SCHEMA = 'yourDatabase'
  AND k.COLUMN_NAME IS NULL
GROUP BY
  c.TABLE_NAME;
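
Each generated statement targets a column that belongs to no index, so the server can only answer it with a full table scan, dragging every page of the table through the Buffer Pool. A hypothetical example of the generated output (table and column names are illustrative):

SELECT address FROM customers WHERE address IS NOT NULL;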

While the resulting queries run and load data into the Buffer Pool, you can use any graphical interface to watch the free space within innodb_buffer_pool_size shrink, or simply check the status variables MySQL provides for monitoring InnoDB:

mysql> show status like 'Innodb_buffer_pool%'\G
*************************** 1. row ***************************
Variable_name: Innodb_buffer_pool_pages_data
Value: 1639
*************************** 2. row ***************************
Variable_name: Innodb_buffer_pool_pages_dirty
Value: 0
*************************** 3. row ***************************
Variable_name: Innodb_buffer_pool_pages_flushed
Value: 2352
*************************** 4. row ***************************
Variable_name: Innodb_buffer_pool_pages_free
Value: 1046928
*************************** 5. row ***************************
Variable_name: Innodb_buffer_pool_pages_misc
Value: 1
*************************** 6. row ***************************
Variable_name: Innodb_buffer_pool_pages_total
Value: 1048568
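
Assuming the information_schema.GLOBAL_STATUS table available in the 5.5 series this post targets (later releases moved these counters to performance_schema), a sketch that turns the page counters into a fill percentage:

SELECT 100 * (1 - free.VARIABLE_VALUE / total.VARIABLE_VALUE) AS pct_of_pool_in_use
FROM information_schema.GLOBAL_STATUS AS free
JOIN information_schema.GLOBAL_STATUS AS total
WHERE free.VARIABLE_NAME = 'Innodb_buffer_pool_pages_free'
AND total.VARIABLE_NAME = 'Innodb_buffer_pool_pages_total';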

Watch the value of Innodb_buffer_pool_pages_free going down: that will show the data preload is really working.

Until next time.


User connection scaling problems in MySQL

December 17th, 2011 Bianchi Posted in MySQL Maintenance, MySQL Tuning 2 Comments »

This week I faced a serious problem at a customer that needed to scale the number of simultaneous MySQL connections past 2,000. Several angles were analyzed, from basics such as the MySQL database server's own configuration to points related to the kernel. Just to put the reader in context: in MySQL, two very important variables determine how many clients may connect to the database server and how many of those connections may exist at the same moment, that is, simultaneously.

  1. max_connections – the parameter that controls the maximum number of simultaneous client connections the MySQL server accepts;
  2. max_user_connections – the parameter that limits how many simultaneous connections a single user account may hold; according to the manual, keeping this variable at zero disables the per-account limit (see the sketch right after this list);
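
Both can be adjusted at runtime, and a cap can also be attached to a single account in its grant, which is often safer than relying only on the global setting; a sketch (the account name is hypothetical):

SET GLOBAL max_connections = 6000;   -- global ceiling on simultaneous connections
SET GLOBAL max_user_connections = 0; -- 0 = no default per-account limit
GRANT USAGE ON *.* TO 'app'@'%' WITH MAX_USER_CONNECTIONS 200; -- cap one account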

It is worth knowing that besides the system variables, which configure the many aspects of MySQL and are what we use for tuning, there are also status variables covering everything the server does; they are what we query to find out what is happening with the database server. Connection-related matters are no different; take a look:

mysql> show status like '%conn%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| Aborted_connects         |    0  |
| Connections              | 1387  |
| Max_used_connections     |  645  |
| Ssl_client_connects      |    0  |
| Ssl_connect_renegotiates |    0  |
| Ssl_finished_connects    |    0  |
| Threads_connected        |  581  |
+--------------------------+-------+

Leaving aside the SSL variables returned by the query above, we have three very important ones: Aborted_connects, Connections, and Max_used_connections. Each has a meaning directly tied to client/user connections to the database server.

  1. Aborted_connects: if this status variable is high, connection attempts are failing (bad credentials, handshake timeouts); connections that drop mid-use because the application never calls a proper connection close are counted in the related Aborted_clients variable;
  2. Connections: the total number of connection attempts since the last restart;
  3. Max_used_connections: the highest number of simultaneous connections seen since the last restart (compare it against max_connections, as sketched below).
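
Assuming the information_schema.GLOBAL_STATUS table of the 5.5 era, a quick way to compare that peak against the configured ceiling:

SELECT s.VARIABLE_VALUE AS max_used_connections,
@@global.max_connections AS configured_ceiling
FROM information_schema.GLOBAL_STATUS AS s
WHERE s.VARIABLE_NAME = 'Max_used_connections';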

With that, we have a direction for working on MySQL connection-scaling issues. I have heard talk of thread schedulers and Linux kernel parameters, but things can be simpler than that. The current settings of a server I have been monitoring are as follows:

mysql> show variables like 'max_connections';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| max_connections | 6000  |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> show variables like 'max_user%';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| max_user_connections | 0     |
+----------------------+-------+
1 row in set (0.00 sec)

Based on that, I began to observe that whenever the number of simultaneous connections reached 1,000, the Aborted_connects status variable started climbing frantically, and when I tried to access MySQL via the mysql client with any user, the following error message was returned:

Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug

In other words, either the database server is configured to use more memory than the machine has available, or there is a bug in the operating system. By elimination: the server running this MySQL instance has 64GB, with 16GB to spare, so the problem had to be something in the operating system. Searching the web, I could see that other colleagues had hit a similar scenario and blogged about it, as Dimitri did at http://bit.ly/trVqL4.

Following roughly what he reported in that post, I had the same ulimit values for the mysql user (su - mysql), but a different value for threads-max, much lower than the 2065067 he shows in the post. So I proceeded like this:

  1. Set the maximum number of threads: echo "2065067" > /proc/sys/kernel/threads-max
  2. Configured the limits.conf file for the mysql and root user sessions:

mysql soft nofile 10240
mysql hard nofile 40960
mysql soft nproc 10240
mysql hard nproc 40960
root soft nofile 10240
root hard nofile 40960
root soft nproc 10240
root hard nproc 40960

Configuration number two looked very familiar and was welcome, since the same is done when installing Oracle Database. After that, several stress tests were run with mysqlslap, the benchmark tool that ships with MySQL, and the problem persisted. Several binaries were tested to check scaling differences between versions:

MySQL Oracle 5.5.17

mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 1135 Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug
mysqlslap: Error when connecting to server: 1135 Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug
mysqlslap: Error when connecting to server: 1135 Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 1135 Can't create a new thread (errno 11); if you are not out of available memory, you can consult the manual for a possible OS-dependent bug
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
Benchmark
Average number of seconds to run all queries: 4.117 seconds
Minimum number of seconds to run all queries: 4.117 seconds
Maximum number of seconds to run all queries: 4.117 seconds
Number of clients running queries: 1200
Average number of queries per client: 0

MySQL Oracle 5.0.92

mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
Benchmark
Average number of seconds to run all queries: 3.049 seconds
Minimum number of seconds to run all queries: 3.049 seconds
Maximum number of seconds to run all queries: 3.049 seconds
Number of clients running queries: 1200
Average number of queries per client: 0

Percona Server 5.5.17

mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
mysqlslap: Error when connecting to server: 2001 Can't create UNIX socket (24)
Benchmark
Average number of seconds to run all queries: 4.137 seconds
Minimum number of seconds to run all queries: 4.137 seconds
Maximum number of seconds to run all queries: 4.137 seconds
Number of clients running queries: 1200
Average number of queries per client: 0

The tests above were run on the same machine with a default configuration file, changing only max_connections=6000 and max_user_connections=0.

[root@server mysql-coms]# my_print_defaults mysqld
--skip-external-locking
--port=3306
--socket=/var/lib/mysql/mysql.sock
--max_connections=6000
--max_user_connections=0

A logical next step was to look more closely at the error "Error when connecting to server: 2001 Can't create UNIX socket (24)", which could be what limited the creation of more threads, and therefore more users, at the operating-system level. That is when I found the MySQL Dojo post where these problems had already been tested, centered on ulimit. In short: explore the values configured in ulimit, or rather in the limits.conf file, and raise them until the tests succeed. So it went like this:

[root@server mysql-rpm]# ulimit -a mysql
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 192031
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 90000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 90000
cpu time (seconds, -t) unlimited
max user processes (-u) 90000
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

[root@server mysql-coms]# mysqlslap --user=root --auto-generate-sql --concurrency=1200 --number-of-queries=1
Benchmark
Average number of seconds to run all queries: 5.775 seconds
Minimum number of seconds to run all queries: 5.775 seconds
Maximum number of seconds to run all queries: 5.775 seconds
Number of clients running queries: 1200
Average number of queries per client: 0

And to prove that MySQL is limited only by the hardware, or, in this case, also by the operating-system configuration...

[root@server mysql-coms]# mysqlslap --user=root --auto-generate-sql --concurrency=2000 --number-of-queries=1
Benchmark
Average number of seconds to run all queries: 18.367 seconds
Minimum number of seconds to run all queries: 18.367 seconds
Maximum number of seconds to run all queries: 18.367 seconds
Number of clients running queries: 2000
Average number of queries per client: 0

[root@server mysql-coms]# mysqlslap --user=root --auto-generate-sql --concurrency=3000 --number-of-queries=1
Benchmark
Average number of seconds to run all queries: 41.411 seconds
Minimum number of seconds to run all queries: 41.411 seconds
Maximum number of seconds to run all queries: 41.411 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

And so another MySQL adventure ends, mission accomplished! See you soon...


Thread Cache – thread_cache_size

November 28th, 2011 Bianchi Posted in MySQL Tuning No Comments »

One of the most critical points for MySQL's workload is the continuous creation of threads: every connection an application or any client makes to MySQL creates a new thread. Imagine a server with this volume of requests:

mysql> \s
--------------
mysql Ver 14.14 Distrib 5.5.17, for Linux (x86_64)

Connection id: 100407
Current database:
Current user: root@localhost
SSL: Not in use
Current pager: stdout
Using outfile: ''
Using delimiter: ;
Server version: 5.5.17-log MySQL Community Server (GPL)
Protocol version: 10
Connection: Localhost via UNIX socket
Server characterset: latin1
Db characterset: latin1
Client characterset: utf8
Conn. characterset: utf8
UNIX socket: /var/lib/mysql/mysql.sock
Uptime: 8 days 17 hours 49 min 6 sec

Threads: 696 Questions: 292951068 Slow queries: 225 Opens: 498354
Flush tables: 1 Open tables: 256 Queries per second avg: 387.836
--------------

The output of the \s (or status) command above shows 696 threads currently connected (active or sleeping). The interesting part is that, through the thread_cache_size variable, we can ask MySQL to clean up threads after a disconnect and keep them in a cache for reuse. That way MySQL no longer has to create a new thread every time a connection is requested. In the example below, on the same server, where simultaneous connections hit nearly 1,000, I left thread_cache_size at its default of 8, a very low value for the current demand.

mysql> show variables like 'thread_cache%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| thread_cache_size | 8     |
+-------------------+-------+
1 row in set (0.00 sec)

Once again, through the status variables, we can check that MySQL reuses thread objects for new connections:

mysql> show status like 'Thread%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Threads_cached    | 7     |
| Threads_connected | 799   |
| Threads_created   | 90435 |
| Threads_running   | 1     |
+-------------------+-------+
4 rows in set (0.00 sec)

mysql> show status like 'Thread%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Threads_cached    | 6     |
| Threads_connected | 799   |
| Threads_created   | 90435 |
| Threads_running   | 2     |
+-------------------+-------+
4 rows in set (0.00 sec)

In the two results above, we can see that:

  • the number of threads in the cache is 7, reported by Threads_cached;
  • the number of connected threads is 799, shown by Threads_connected;
  • the number of threads created since the last restart is 90435, Threads_created;
  • the number of threads currently in a state other than Sleep, Threads_running.
A good reading of this scenario: the Threads_cached count dropped because one of the 7 cached threads was handed to a new connection, which is now in a state other than Sleep (visible through SHOW PROCESSLIST). The optimization shows in exactly that reuse of the cached thread: no new thread was created, since Threads_created did not change. You can tune how many threads MySQL keeps in its thread cache, the area controlled by the thread_cache_size variable, through the configuration file, setting it to a value close to the number of threads already created, reported by Threads_created.

[mysqld]
thread_cache_size = 1000

There is a reported problem with thread_cache_size values above 14 in versions prior to MySQL 5.5. I have MySQL 5.5+ database servers at several customers running much higher values without any problem at all. The thread cache can reduce pressure on swap and CPU load, letting the MySQL engine spend the machine's resources on other concerns, such as delivering data. For example:

mysql> show status like 'Threads%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Threads_cached    | 273   |
| Threads_connected | 727   |
| Threads_created   | 4659  |
| Threads_running   | 101   |
+-------------------+-------+
4 rows in set (0.00 sec)
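
Assuming the same information_schema.GLOBAL_STATUS table of this MySQL series, a sketch that estimates how often a new connection misses the cache and forces a fresh thread to be created (the closer to zero, the better the cache is doing):

SELECT created.VARIABLE_VALUE / conn.VARIABLE_VALUE AS thread_cache_miss_rate
FROM information_schema.GLOBAL_STATUS AS created
JOIN information_schema.GLOBAL_STATUS AS conn
WHERE created.VARIABLE_NAME = 'Threads_created'
AND conn.VARIABLE_NAME = 'Connections';
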
See you…