WagnerBianchi.com

MariaDB 10.3 PL/SQL I

October 17th, 2017 admin Posted in MySQL A&D, MySQL New Features

Since MariaDB Corporation started the project to bring PL/SQL and other Oracle-based functionality to MariaDB Server, I've been curious and started looking around to see how it was going. What usually catches the attention of database and system administrators, or Database Operations folks, is the new things that pop up from time to time and make it worth checking and testing. This time was no different: having worked with PL/SQL before, I felt this is a giant step for MariaDB users who want to put SQL scripts together in a more structured way. I am not saying that the standard syntax natively supported by MariaDB is not good; I am just saying that it is good to innovate and let new procedural SQL scripts use the PL/SQL features.

The first step is to get a server of your preference running MariaDB 10.3 or later, in order to have the features announced here: https://mariadb.com/kb/en/library/mariadb-1030-release-notes/. At the time I'm writing this blog post, 10.3.1 is the latest release of the MariaDB 10.3 series, with the following changelog: https://mariadb.com/kb/en/library/mariadb-1031-release-notes/. Leaving aside InnoDB 5.7.19 being added to MariaDB, replacing the XtraDB storage engine, and many other new things, this blog post is dedicated mainly to writing code in PL/SQL, covering the features released up to MariaDB 10.3.1.

After getting MariaDB Server 10.3.1 or later running on your system (I'm running it on CentOS 7.3, just for the record), you need to add the initial configuration that makes MariaDB Server understand PL/SQL (pay attention to the version, it's important). That's done by setting the global variable @@sql_mode to ORACLE, as you can see below:

#: /etc/my.cnf.d/server.cnf | grep sql_mode
[mysqld]
sql_mode=oracle

If you are in doubt about where to place the sql_mode setting, just follow what is shown above: edit the file /etc/my.cnf.d/server.cnf and add sql_mode=oracle under the [mysqld] section; it's going to work for you as it worked for me. Alternatively, you can set the @@sql_mode value globally at runtime, but that setting only lasts until the next restart.
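If you provision servers with scripts, that edit can be automated. Below is a minimal sketch (the helper function and the scratch-copy demo are mine; only the file path and the sql_mode=oracle line come from the post):

```shell
#!/bin/sh
set -e

# Idempotently add sql_mode=oracle right below the [mysqld] section header.
ensure_oracle_mode() {
  cnf="$1"
  grep -q '^sql_mode=oracle$' "$cnf" || \
    sed -i '/^\[mysqld\]/a sql_mode=oracle' "$cnf"
}

# Demo on a scratch copy; in real life point it at /etc/my.cnf.d/server.cnf.
cnf=$(mktemp)
printf '[mysqld]\nlog_error=/var/log/mariadb.log\n' > "$cnf"
ensure_oracle_mode "$cnf"
ensure_oracle_mode "$cnf"   # second run is a no-op
cat "$cnf"
```

Remember that a restart is still needed for the file-based setting to reach a running server, or set @@sql_mode globally at runtime as well.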

MariaDB [(none)]> SELECT @@sql_mode;
+------------+
| @@sql_mode |
+------------+
|            |
+------------+
1 row in set (0.001 sec)
 
MariaDB [(none)]> SET GLOBAL sql_mode=ORACLE;
Query OK, 0 rows affected (0.000 sec)
 
MariaDB [(none)]> SELECT @@sql_mode\G
*************************** 1. row ***************************
@@sql_mode: PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,ORACLE,NO_KEY_OPTIONS,
            NO_TABLE_OPTIONS,NO_FIELD_OPTIONS,NO_AUTO_CREATE_USER
1 row in set (0.000 sec)

By the way, it's important to add the setting to your MariaDB Server's configuration file so PL/SQL programs don't stop working after a restart. Be aware that when you set @@sql_mode to ORACLE to enable PL/SQL on MariaDB 10.3, MySQL's native syntax for creating routines, which adheres fairly closely to the SQL:2003 standard, is no longer available; any attempt to create a native SQL procedure fails with a syntax error.

MariaDB [mydb]> delimiter /
MariaDB [mydb]> create procedure p1 (a int)
    -> begin
    ->     declare var int default 0;
    ->     while a > var do
    ->         insert into t1 set i=var;
    ->         set var = var +1;
    ->     end while;
    -> end;
    -> /
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'begin
    declare var int default 0;
    while a > var do
        insert into t1' at line 3
Query OK, 0 rows affected (0.000 sec)

As we are running MariaDB 10.3 with @@sql_mode set to ORACLE, I'm going to create a table and, right after, a simple PL/SQL procedure that prepares (parses) and immediately executes a dynamic SQL statement.

The created table:

MariaDB [mydb]> show create table t1/
+-------+-------------------------------------------------------------------+
| Table | Create Table                                                      |
+-------+-------------------------------------------------------------------+
| t1    | CREATE TABLE "t1" (
  "i" int(11) NOT NULL,
  PRIMARY KEY ("i")
) |
+-------+-------------------------------------------------------------------+
1 row in set (0.000 sec)

A simple PL/SQL procedure that inserts a static value into the table mydb.t1:

MariaDB [mydb]> delimiter /
MariaDB [mydb]> create or replace procedure p1
    -> as
    ->     a mydb.t1.i%TYPE := 1;
    -> begin
    ->     execute immediate 'insert into mydb.t1 (i) values (:a)' USING a;
    -> end;
    -> /
Query OK, 0 rows affected (0.005 sec)

Now we can call the procedure p1:

MariaDB [mydb]> call mydb.p1/
Query OK, 1 row affected (0.002 sec)
 
MariaDB [mydb]> select * from t1/
+---+
| i |
+---+
| 1 |
+---+
1 row in set (0.000 sec)

But most of the time you want to pass a parameter carrying the value to be inserted into a table, perhaps after it has been worked on by the procedure. For the sake of simplicity, I'm going to pass one parameter to the procedure and insert it into the table t1.

Creating the procedure p2:

MariaDB [mydb]> delimiter /
MariaDB [mydb]> create or replace procedure p2 (i int)
    -> as
    ->     a mydb.t1.i%TYPE := i;
    -> begin
    ->     execute immediate 'insert into mydb.t1 (i) values (:a)' USING a;
    -> end;
    -> /
Query OK, 0 rows affected (0.002 sec)

Calling the procedure p2:

MariaDB [mydb]> call p2(100)/
Query OK, 1 row affected (0.003 sec)
 
MariaDB [mydb]> select * from t1/
+-----+
| i   |
+-----+
|   1 |
| 100 |
+-----+
2 rows in set (0.000 sec)

Conclusion

This small article showed you how to configure MariaDB 10.3, already available for download on the MariaDB website, with @@sql_mode set to ORACLE, which permits you to create procedures, functions, triggers, packages and so on using PL/SQL, Oracle's procedural language. In a next post, I'm going to bring up more sophisticated constructions from MariaDB's PL/SQL support.


Working with MySQL on SSD

November 6th, 2014 admin Posted in MySQL A&D, MySQL Tuning

I'd like to start this post by registering that, even though SSD cards and disks provide very low latency and fast random reads/writes, I consider them still new to MySQLers, at least in the MySQL world: new judging by the information we can find on the internet, shared to make it run "like a charm", and by the presentations we've been seeing more and more over the last months. Things like SLC and MLC are better explained now than before, and what I've seen is that the MySQL team has worked a lot on all these *new* things to make MySQL scale further in terms of I/O usage, delivering better results from simple SELECT queries to heavy ALTER TABLEs. What I expected, when SSD came into the plans of a customer I'm working with in Brazil, was that all queries would perform better just by sitting on faster disks; this is not true. Many tests have been done using sysbench 0.5 and more than one table, as discussed on Twitter with @cpeintre, @morgo and @stoker. The sysbench results will be in focus later in this post.

Presenting, Dell Compellent SC8000, the storage!

My first desire was to have Fusion-io cards to run some of MySQL's files on, making it easy, as the market has been doing that for some time. For years I've seen people speaking about those flash cards delivering lots of IOPS and making MySQL run faster. However, when our Dell contact presented the Dell Compellent SC8000, we saw the possibility of expanding the IT environment into something more professional, with the option of scaling the hardware in case we need to provide more space on our database layer. This storage, aka "external storage", represents a large investment and a giant step in terms of environment professionalism, and was planned as something that would provide all the IOPS and speed we need to solve problems ranging from queries feeding reports to replication lag that happens for no apparent reason (we can go into details in another post). Detailing the storage: it always writes to SLC flash disks organized in RAID 10 (tier 1), always reads from MLC flash disks organized in RAID 5 (tier 2), and data/pages not accessed are moved, initially after 12 days, to the 10k mechanical disks in RAID 6, which form tier 3.

Additionally, tier 2 is the hot area where the most accessed data resides. When data is inserted into the database, it's moved to tier 2 in the background and, if not accessed, moved away to the mechanical disks, the less privileged area. It seems to me that internally this storage keeps a kind of hash table of all the pages contained in the hot area, that is, tier 2, and from time to time it moves the less accessed pages out. In case tier 2 gets full, less accessed pages are moved to tier 3 before the 12th day. Basically, one can choose a profile when creating a new LUN; this profile can comprise tier 1 only, tier 2 only, tier 3 only, or any combination of them. The connectivity between storage and servers is a dedicated Fibre Channel network, using 8Gb dual-port HBAs (round-robin).

Nice…it’s flexible. More here…

Test with FIO

In theory all of that seemed OK, so we went for a test with fio. The test went very well: just create a LUN with a profile such as "automatic", containing all the existing tiers, and mount it on a Linux machine, in our case Red Hat 6.5. After writing a configuration file for fio simulating what MySQL does on our environment, it was executed in two scenarios: (NDB2) our server running all MySQL files on HDD, and (STORAGE) another server running the MySQL files on SSD. The fio job file is below:

[random-writes]
; this test was written by Bianchi
; me at wagnerbianchi.com
runtime=1800 # 30mins
size=2G
threads=1
numjobs=16
ioengine=libaio
iodepth=32
bs=5k
; innodb related stuff
lockfile=readwrite # simulating row-locking
rw=randwrite       # writing in pages randomly
direct=0           # buffered I/O (rely on the FS cache)
sync=0             # no O_SYNC per write
refill_buffers     # Buffer Pool load pages
openfiles=1000

My intention in configuring direct=0 and sync=0 was to reproduce what we have in our current production environment: hand all the writes to a battery-backed cache and return immediately. The test results:

Server Job # IO (MB)  IOPS  TIME (ms)
NDB2     1   1965.4       368   1091316
NDB2     2   2047.2       498    841042
NDB2     3   2047.2       380   1103541
NDB2     4   1704.3       443    787271
NDB2     5   2047.2       471    889231
NDB2     6   2015.6       434    951029
NDB2     7   2047.2       411   1020253
NDB2     8   2047.2       387   1081822
NDB2     9   2047.2       481    870714
NDB2    10   2011.1       549    749626
NDB2    11   1633.6       740    452040
NDB2    12   2047.2       488    858940
NDB2    13   2047.2       378   1107883
NDB2    14   1945.6       602    661052
NDB2    15   2047.2       585    716770
NDB2    16   2000.9       601    680994


Server  Job # IO (MB)  IOPS  TIME (ms)
STORAGE     1  1965.4     2115    190270
STORAGE     2  2047.2     2925    143387
STORAGE     3  2047.2     3212    130562
STORAGE     4  1704.3     2910    119915
STORAGE     5  2047.2     3010    139334
STORAGE     6  2015.6     2138    193032
STORAGE     7  2047.2     3073    136465
STORAGE     8  2047.2     2791    150233
STORAGE     9  2047.2     2415    173628
STORAGE    10  2011.1     3027    136085
STORAGE    11  1633.6     2186    153012
STORAGE    12  2047.2     2700    155319
STORAGE    13  2047.2     2779    150917
STORAGE    14  1945.6     2886    138059
STORAGE    15  2047.2     2785    150573
STORAGE    16  2000.9     2865    142991

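Putting the two runs on the same scale: per-job throughput is just IO (MB) divided by elapsed time. A quick sketch over the first job of each run (the numbers are copied from the tables above):

```shell
# columns: server, job, MB written, elapsed ms  ->  MB/s
awk '{ printf "%s job %s: %.1f MB/s\n", $1, $2, $3 / ($4 / 1000) }' <<'EOF'
NDB2 1 1965.4 1091316
STORAGE 1 1965.4 190270
EOF
```

That is roughly 1.8 MB/s of random 5k writes on HDD against 10.3 MB/s on the storage for the same job.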

While the IOPS are far higher on SSD, the latency is far lower. The next step was to set everything up, get the storage working inside our main DC, and mount a LUN on a server to carry on with the tests. The first sysbench run used Percona Server 5.5.37.1 and, even after configuring innodb_adaptive_flushing_method as keep_average, innodb_flush_neighbor_pages as area, and changing the redo log block size to 4096, MySQL wasn't able to use all the I/O we were expecting. At that point, talking with @morgo, a version upgrade came onto the scene and I went for it. The only barrier in upgrading from 5.5 to 5.6 was the question around temporal data types, which we discussed with some folks on the official MySQL Forum. Even though replication between 5.5.37 (production master) and 5.6.21 (the newcomer slave on SSD) had been running well for more than 10 hours, I decided to apply the solution proposed by Johnaton Coombes. It's been running well ever since…

Sysbench’ing

After seeing that the storage really delivers, the next task was to find the best configuration for running MySQL on SSD. After reading Matsunobu's entry on his blog, I rearranged everything, keeping the sequentially-written files on HDD and just the tables and the shared tablespace on SSD (it's also possible to put the undo logs on SSD and everything else in ibdata1 on HDD). That gave me new numbers, and replication gained throughput with the relay logs sitting alongside the redo logs, the error log and the slow query log. Thanks to @cpeintre for the hint about using more than one table in sysbench scenarios, and to @lefred for hosting the sysbench RPM package on his blog (it's nice).
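In my.cnf terms, that split might look like the fragment below (the /ssd and /hdd paths are illustrative placeholders, not the ones used in this environment):

```ini
[mysqld]
# random-access files on the SSD LUN
datadir                   = /ssd/mysql/data
# sequentially-written files stay on HDD
innodb_log_group_home_dir = /hdd/mysql/logs
log_bin                   = /hdd/mysql/logs/mysql-bin
relay_log                 = /hdd/mysql/logs/relay-bin
log_error                 = /hdd/mysql/logs/error.log
slow_query_log_file       = /hdd/mysql/logs/slow.log
```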

innodb_flush_method and innodb_flush_log_at_trx_commit

At this point I started some tests to find the best combination of a few important parameters for handling the InnoDB write workflow. In my current environment on mechanical disks, I've configured mysqld to lean heavily on memory and the filesystem cache, taking into account that the underlying hardware relies on disk controllers with 512MB of battery-backed cache; this permits the whole system to deliver almost 9,300 IOPS using RAID 1. My intention here was to test innodb_flush_log_at_trx_commit=1 with innodb_flush_method=O_DIRECT against innodb_flush_log_at_trx_commit={0|2} with innodb_flush_method=O_DSYNC. For the record, I'm using Oracle's MySQL.

Considering that O_DSYNC with innodb_flush_log_at_trx_commit as 0 or 2 had the same results…

[chart omitted]

Let's benchmark it, then.

--innodb_io_capacity=2000
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=1
--innodb_flush_method=O_DIRECT
[charts omitted]
--innodb_io_capacity=2000
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=0
--innodb_flush_method=O_DSYNC
[charts omitted]


The final summary was:

[summary charts omitted]

innodb_io_capacity and innodb_lru_scan_depth

After reading the blog entry Mark Callaghan wrote in 2013 about these two system variables, I decided to use the same value on both as a starting point. As Mark explains well in that entry and on Twitter, both variables will let mysqld issue more IOPS if the system has resources to spare. So I went from 1000 to 3000 to find a reasonable value for io_capacity, and did the same for lru_scan_depth.

#: Sysbench line used here:
[bianchi@ndb2 db]$ sudo sysbench --test=oltp.lua --oltp-table-size=1000000 --mysql-db=test --oltp-tables-count=10 --mysql-user=bianchi --db-driver=mysql --mysql-table-engine=innodb --max-time=300 --max-requests=0 --report-interval=60 --num-threads=500 --mysql-socket=/var/mysql/logs/mysql.sock --mysql-engine-trx=yes run

1-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 1000 |                    1000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2895.09, reads/s: 43241.46, writes/s: 11824.06, response time: 1278.56ms (95%)
[ 120s] threads: 500, tps: 2919.87, reads/s: 43432.81, writes/s: 11914.27, response time: 1387.02ms (95%)
[ 180s] threads: 500, tps: 2911.20, reads/s: 43266.95, writes/s: 11875.58, response time: 1397.43ms (95%)
[ 240s] threads: 500, tps: 2896.17, reads/s: 43039.52, writes/s: 11812.63, response time: 1385.36ms (95%)
[ 300s] threads: 500, tps: 2881.70, reads/s: 42842.40, writes/s: 11756.67, response time: 1382.87ms (95%)

2-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    2000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2834.36, reads/s: 42276.71, writes/s: 11570.30, response time: 1293.57ms (95%)
[ 120s] threads: 500, tps: 2964.74, reads/s: 44071.70, writes/s: 12094.58, response time: 1383.70ms (95%)
[ 180s] threads: 500, tps: 2943.48, reads/s: 43790.31, writes/s: 12011.63, response time: 1380.39ms (95%)
[ 240s] threads: 500, tps: 2940.23, reads/s: 43772.47, writes/s: 12002.10, response time: 1381.63ms (95%)
[ 300s] threads: 500, tps: 2961.58, reads/s: 44007.70, writes/s: 12079.94, response time: 1376.67ms (95%)

3-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    4000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2835.78, reads/s: 42283.84, writes/s: 11577.04, response time: 1287.78ms (95%)
[ 120s] threads: 500, tps: 2866.35, reads/s: 42659.13, writes/s: 11697.75, response time: 1418.51ms (95%)
[ 180s] threads: 500, tps: 2901.80, reads/s: 43129.23, writes/s: 11834.54, response time: 1383.28ms (95%)
[ 240s] threads: 500, tps: 2924.12, reads/s: 43527.28, writes/s: 11934.51, response time: 1394.09ms (95%)
[ 300s] threads: 500, tps: 2928.04, reads/s: 43537.30, writes/s: 11946.43, response time: 1390.76ms (95%)

4-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    3000 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2915.01, reads/s: 43438.88, writes/s: 11896.84, response time: 1276.65ms (95%)
[ 120s] threads: 500, tps: 3003.12, reads/s: 44634.98, writes/s: 12248.90, response time: 1345.71ms (95%)
[ 180s] threads: 500, tps: 2983.62, reads/s: 44394.64, writes/s: 12174.23, response time: 1372.15ms (95%)
[ 240s] threads: 500, tps: 2971.40, reads/s: 44181.10, writes/s: 12122.10, response time: 1361.10ms (95%)
[ 300s] threads: 500, tps: 2976.20, reads/s: 44241.53, writes/s: 12140.61, response time: 1360.70ms (95%)

5-) "select @@innodb_io_capacity, @@innodb_lru_scan_depth, @@innodb_buffer_pool_instances;"
+----------------------+-------------------------+--------------------------------+
| @@innodb_io_capacity | @@innodb_lru_scan_depth | @@innodb_buffer_pool_instances |
+----------------------+-------------------------+--------------------------------+
|                 2000 |                    2500 |                              8 |
+----------------------+-------------------------+--------------------------------+
[  60s] threads: 500, tps: 2915.46, reads/s: 43605.14, writes/s: 11914.68, response time: 1207.51ms (95%)
[ 120s] threads: 500, tps: 2993.02, reads/s: 44541.72, writes/s: 12214.99, response time: 1358.26ms (95%)
[ 180s] threads: 500, tps: 3004.48, reads/s: 44628.71, writes/s: 12254.80, response time: 1346.52ms (95%)
[ 240s] threads: 500, tps: 3014.33, reads/s: 44839.96, writes/s: 12298.70, response time: 1366.41ms (95%)
[ 300s] threads: 500, tps: 2974.83, reads/s: 44291.42, writes/s: 12142.27, response time: 1357.04ms (95%)

Summarizing the above collected facts, in terms of…

Response Times

[chart omitted]

TPS

[chart omitted]

Reads/Writes

[chart omitted]

innodb_log_buffer_size

This had been configured with a large value, and it was annoying me a little. After finding Shlomi Noach's blog entry with a good query to measure how much redo is written through the log buffer, it seemed important to put a more accurate configuration in place.

ndb2 mysql> SELECT
    ->   innodb_os_log_written_per_minute*60
    ->     AS estimated_innodb_os_log_written_per_hour,
    ->   CONCAT(ROUND(innodb_os_log_written_per_minute*60/1024/1024, 1), 'MB')
    ->     AS estimated_innodb_os_log_written_per_hour_mb
    -> FROM
    ->   (SELECT SUM(value) AS innodb_os_log_written_per_minute FROM (
    ->     SELECT -VARIABLE_VALUE AS value
    ->       FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    ->       WHERE VARIABLE_NAME = 'innodb_os_log_written'
    ->     UNION ALL
    ->     SELECT SLEEP(60)
    ->       FROM DUAL
    ->     UNION ALL
    ->     SELECT VARIABLE_VALUE
    ->       FROM INFORMATION_SCHEMA.GLOBAL_STATUS
    ->       WHERE VARIABLE_NAME = 'innodb_os_log_written'
    ->   ) s1
    -> ) s2
    -> \G
*************************** 1. row ***************************
   estimated_innodb_os_log_written_per_hour: 1008476160
estimated_innodb_os_log_written_per_hour_mb: 961.8MB
1 row in set (59.99 sec)

ndb2 mysql> SELECT (961.8/60)\G
*************************** 1. row ***************************
(961.8/60): 16.03000
1 row in set (0.00 sec)

Operating System Tuning

All the servers planned to sit on the storage run Red Hat 6.5. After updating the operating system packages, we followed the recommendations of the paper released by Oracle, differing only in the scheduler/elevator, where we decided to use NOOP. In the midst of the configuration work, which ran for some days, there was a moment when we forgot the configuration below, and we got the chance to see performance improve by 30% once it was applied, considering replication lag and execution times for read and write queries. As the storage attached/mounted on the filesystem is represented by an alias or device mapper (it appears as dm-X) covering all the underlying disks, it's possible to configure just the device mappers to make all of this work properly with NOOP.

$ echo 10000 > /sys/block/sdb/queue/nr_requests
$ echo 1024 > /sys/block/sdb/queue/max_sectors_kb
$ echo 0 > /sys/block/sdb/queue/rotational
$ echo 0 > /sys/block/sdb/queue/add_random
$ echo 0 > /sys/block/sdb/queue/rq_affinity

You can check the meaning of each setting in the Red Hat Knowledge Base. Additionally, it is worth placing all the above configuration in /etc/rc.local so it survives a reboot.
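Since new dm-X devices show up as LUNs are mapped, the rc.local entries can be generated by a small loop. A sketch (the helper function and its DRYRUN switch are mine; the values are the ones above, plus the NOOP scheduler choice mentioned earlier):

```shell
#!/bin/sh
# Apply the queue settings to each listed block device (e.g. dm-0 dm-1).
# With DRYRUN=1 the commands are only printed instead of executed.
tune_queue() {
  for dev in "$@"; do
    for kv in 'scheduler noop' 'nr_requests 10000' 'max_sectors_kb 1024' \
              'rotational 0' 'add_random 0' 'rq_affinity 0'; do
      key=${kv% *}
      val=${kv#* }
      if [ "${DRYRUN:-0}" = 1 ]; then
        echo "echo $val > /sys/block/$dev/queue/$key"
      else
        echo "$val" > "/sys/block/$dev/queue/$key"
      fi
    done
  done
}

DRYRUN=1
tune_queue dm-0 dm-1
```

Running it without DRYRUN from /etc/rc.local applies the settings at every boot.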

Replication Lagging Problems

OK, so scenario #5 is the best of all. Still, my feeling is that at some point all those sysbench benchmarks lied completely! When I configured my server on SSD and got it replicating, catching up on the master's data, the lag didn't decrease after an hour; instead, it increased, and the slave got further and further from the master, almost fading away down the road. Thinking about the configuration of my established environment, I decided to set innodb_flush_method to O_DSYNC, relying on the filesystem cache and the storage controller's battery-backed cache (64GB), and to configure innodb_flush_log_at_trx_commit as 0 as well. Things got a little faster, as the lag stopped growing and held at the same Seconds_Behind_Master. I finally made it decrease by properly tuning innodb_log_buffer_size, as mentioned in a section above, and then the replication lag disappeared; this new server is the only one that always stays below the red lag state of 50 seconds (our company's threshold). At first I configured the log buffer as 8M but, checking the status variables properly, I saw many pending syncs accumulating there; I jumped to 32M and now everything is OK. The next step, as I'm running 5.6 now, is to jump into the party and start using PERFORMANCE_SCHEMA and other smart tools to monitor the environment, increase throughput, and get even lower response times.
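Those pending syncs can be watched through the Innodb_log_waits status counter, which MySQL increments every time a transaction has to wait for space in the log buffer. A sketch comparing two samples (the sampling helper and the canned values are mine; only the counter name comes from MySQL):

```shell
#!/bin/sh
# In production the two lines would come from running, N seconds apart:
#   mysql -Ne "SHOW GLOBAL STATUS LIKE 'Innodb_log_waits'"
log_waits_delta() {
  awk '$1 == "Innodb_log_waits" { v[++n] = $2 } END { print v[2] - v[1] }'
}

printf 'Innodb_log_waits 120\nInnodb_log_waits 180\n' | log_waits_delta
```

A steadily positive delta between samples suggests innodb_log_buffer_size is too small, which is what I saw at 8M before moving to 32M.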

By the way, up to this point I've been running MySQL 5.6.21 with the configuration below and the Linux adjustments previously mentioned:

--innodb_io_capacity=2000
--innodb_io_capacity_max=2500
--innodb_lru_scan_depth=2500
--innodb_flush_log_at_trx_commit=0
--innodb_flush_neighbors=0
--innodb_log_group_home_dir=/var/mysql/logs
--innodb_log_files_in_group=2
--innodb_log_file_size=1024M
--innodb_buffer_pool_size=72G
--innodb_doublewrite=0
--innodb_buffer_pool_instances=10
--innodb_log_buffer_size=32M
--innodb_file_per_table=1
--innodb_file_format=BARRACUDA
--innodb_flush_method=O_DSYNC
--innodb_open_files=900000
--innodb_read_io_threads=16
--innodb_write_io_threads=16
--innodb_support_xa=0

The final comment is that, running MySQL on SSD, 5.5 is the worst case and 5.6 does a little better, now that the I/O-related charts are getting more colorful in Ganglia and in Enterprise Manager, the storage's monitoring console. Still speaking about 5.6, it's good to keep an eye on two variables: innodb_lru_scan_depth, which governs the page-flushing I/O per buffer pool instance, and innodb_flush_neighbors, which governs whether neighboring pages are flushed together, something you want disabled on SSD. I believe that soon I'll have more to post here on performance tuning.


Fast Index Creation really matters

July 23rd, 2014 admin Posted in MySQL A&D, MySQL Tuning

In one of the recent projects I've been involved in, I had to review the data model to find redundant or unnecessary indexes on tables. The scenario is one where the database was recently moved from MyISAM to the InnoDB storage engine. Considering that some critical queries were hanging inside InnoDB for a long time, I decided to remove some of the redundant indexes from some tables and then re-validate the queries with fewer index options in terms of optimization. To remove the indexes, I could either run a simple ALTER TABLE … DROP INDEX or use pt-online-schema-change, the latter keeping the table available throughout the operation. This is no trivial operation if you consider MySQL versions prior to 5.5 (or 5.1 + InnoDB Plugin), especially as the very first table targeted by the ALTER TABLE used to reside on MySQL 5.0 (a traumatic period) and is one of the biggest tables in the schema: 784GB. The table now resides on MySQL 5.5, but the MySQL 5.0 trauma remains in the team members' minds.

The whole operation went very well in terms of ALTER TABLE execution: it was fast and painless. Often the folks on the customer side want to be comfortable with the solution about to be applied, so nobody loses sleep at night or on weekends; I like that as well, due to the audit process implied. By the way, the ALTER TABLE that dropped the indexes was executed on the MASTER server and replicated to 10 slave servers, and everything is running well. Avoid modifying tables directly on slaves, though: at least on 5.5 I hit a problem that was published at bugs.mysql.com, which you can check here (http://bugs.mysql.com/bug.php?id=60784).

With all that said, the intention of this post is to demonstrate how important and how fast it is to CREATE or DROP a secondary index on InnoDB tables. I want to compare versions 5.0 and 5.5, and, as I'm planning to migrate all my customers to 5.6, I will soon provide timings for that version as well.

Test Scenario

The very first thing in benchmarking index creation and removal is to come up with a table holding a lot of data and some complicated columns, configured with complex data types such as large VARCHAR, TEXT and BLOB. That gives us a complex scenario for dealing with indexes on new and old MySQL versions. Let me call your attention to the fact that, to add or remove a secondary index on InnoDB tables from 5.1 + InnoDB Plugin / 5.5 onwards, a copy-alter-rename of the table is not needed, thanks to Fast Index Creation; the opposite of what happens when a clustered index column is altered. This is the focus of the tests here, and versions 5.0, 5.1, 5.5 and 5.6 will take part in this small benchmark. I raised a Vagrant VM with a script automating the setup of all the MySQL versions, as exhibited below:

[root@mysql56 ~]# find / -name mysqld
/mysql50/bin/mysqld
/mysql56/bin/mysqld
/mysql51/bin/mysqld
/mysql55/bin/mysqld

[root@mysql56 ~]# ls -lh / | grep mysql
drwxr-xr-x  15 mysql   mysql   4.0K May 31 01:12 mysql50
drwxr-xr-x  13 mysql   mysql   4.0K May 31 00:35 mysql51
drwxr-xr-x  13 mysql   mysql   4.0K May 31 01:15 mysql55
drwxr-xr-x  13 mysql   mysql   4.0K May 31 00:16 mysql56

[root@mysql56 bin]# /etc/init.d/mysql50 status
MySQL is not running                                       [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql51 status
MySQL is not running, but PID file exists                  [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql55 status
MySQL is not running, but PID file exists                  [FAILED]
[root@mysql56 bin]# /etc/init.d/mysql56 status
MySQL is not running, but PID file exists                  [FAILED]

[root@mysql56 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      7.7G  7.3G     0 100% /
/dev/sda1              99M   20M   75M  21% /boot
tmpfs                 502M     0  502M   0% /dev/shm
/dev/sdb1             7.9G  147M  7.4G   2% /mysql50/datadir
/dev/sdc1             7.9G  147M  7.4G   2% /mysql51/datadir
/dev/sdd1             7.9G  147M  7.4G   2% /mysql55/datadir
/dev/sde1             7.9G  147M  7.4G   2% /mysql56/datadir

1. Create a complex test table and a stored procedure to populate the table:

I'm not sure this table is complex enough compared with the large tables we encounter in company databases. Generally, when tables are the target of INSERTs, DELETEs and UPDATEs, some other factors must be considered, the main ones being data fragmentation due to random access to InnoDB pages and a lack of accuracy in the table statistics; both aspects can be addressed with OPTIMIZE TABLE. Anyway, the timings should be close to the ones presented in these tests. Let's create the table and the procedure that will load data into it.

#
#: creating the database and a table to 
#: accommodate data for the tests
#
mysql> CREATE DATABASE wb;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE TABLE wb.tbl01 (
    -> a bigint not null auto_increment primary key,
    -> b varchar(144) not null,
    -> c char(144) not null,
    -> d longblob,
    -> e longtext
    -> ) ENGINE=InnoDB;
Query OK, 0 rows affected (1.68 sec)
#
#: stored procedure to load data in the table
#
mysql> DELIMITER //
mysql> CREATE PROCEDURE wb.proc01(IN p_num BIGINT)
 -> BEGIN
 -> SET @u_var = 0;
 -> WHILE @u_var <= p_num DO
 -> INSERT INTO wb.tbl01
 -> SET a=@u_var,
 -> b=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),1),
 -> c=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),1),
 -> d=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),2),
 -> e=REPEAT(CONCAT(DATE_FORMAT(NOW(),'%d%m%Y%h%m%s'),md5(@u_var)),2);
 -> SET @u_var = @u_var+1;
 -> END WHILE;
 -> END //
Query OK, 0 rows affected (0.00 sec)

#
#: this is the resultant data after running the procedure above
#
mysql> select * from wb.tbl01 limit 10\G
*************************** 1. row ***************************
a: 1
b: 23072014070734c4ca4238a0b923820dcc509a6f75849b
c: 23072014070734c4ca4238a0b923820dcc509a6f75849b
d: 23072014070734c4ca4238a0b923820dcc509a6f75849b23072014070734c4ca4238a0b923820dcc509a6f75849b
e: 23072014070734c4ca4238a0b923820dcc509a6f75849b23072014070734c4ca4238a0b923820dcc509a6f75849b
*************************** 2. row ***************************
a: 2
b: 23072014070734c81e728d9d4c2f636f067f89cc14862c
c: 23072014070734c81e728d9d4c2f636f067f89cc14862c
d: 23072014070734c81e728d9d4c2f636f067f89cc14862c23072014070734c81e728d9d4c2f636f067f89cc14862c
e: 23072014070734c81e728d9d4c2f636f067f89cc14862c23072014070734c81e728d9d4c2f636f067f89cc14862c
*************************** 3. row ***************************
a: 3
b: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
c: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
d: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf323072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
e: 23072014070734eccbc87e4b5ce2fe28308fd9f2a7baf323072014070734eccbc87e4b5ce2fe28308fd9f2a7baf3
*************************** 4. row ***************************
a: 4
b: 23072014070734a87ff679a2f3e71d9181a67b7542122c
c: 23072014070734a87ff679a2f3e71d9181a67b7542122c
d: 23072014070734a87ff679a2f3e71d9181a67b7542122c23072014070734a87ff679a2f3e71d9181a67b7542122c
e: 23072014070734a87ff679a2f3e71d9181a67b7542122c23072014070734a87ff679a2f3e71d9181a67b7542122c
*************************** 5. row ***************************
a: 5
b: 23072014070734e4da3b7fbbce2345d7772b0674a318d5
c: 23072014070734e4da3b7fbbce2345d7772b0674a318d5
d: 23072014070734e4da3b7fbbce2345d7772b0674a318d523072014070734e4da3b7fbbce2345d7772b0674a318d5
e: 23072014070734e4da3b7fbbce2345d7772b0674a318d523072014070734e4da3b7fbbce2345d7772b0674a318d5
*************************** 6. row ***************************
a: 6
b: 230720140707341679091c5a880faf6fb5e6087eb1b2dc
c: 230720140707341679091c5a880faf6fb5e6087eb1b2dc
d: 230720140707341679091c5a880faf6fb5e6087eb1b2dc230720140707341679091c5a880faf6fb5e6087eb1b2dc
e: 230720140707341679091c5a880faf6fb5e6087eb1b2dc230720140707341679091c5a880faf6fb5e6087eb1b2dc
*************************** 7. row ***************************
a: 7
b: 230720140707348f14e45fceea167a5a36dedd4bea2543
c: 230720140707348f14e45fceea167a5a36dedd4bea2543
d: 230720140707348f14e45fceea167a5a36dedd4bea2543230720140707348f14e45fceea167a5a36dedd4bea2543
e: 230720140707348f14e45fceea167a5a36dedd4bea2543230720140707348f14e45fceea167a5a36dedd4bea2543
*************************** 8. row ***************************
a: 8
b: 23072014070734c9f0f895fb98ab9159f51fd0297e236d
c: 23072014070734c9f0f895fb98ab9159f51fd0297e236d
d: 23072014070734c9f0f895fb98ab9159f51fd0297e236d23072014070734c9f0f895fb98ab9159f51fd0297e236d
e: 23072014070734c9f0f895fb98ab9159f51fd0297e236d23072014070734c9f0f895fb98ab9159f51fd0297e236d
*************************** 9. row ***************************
a: 9
b: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
c: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
d: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad262307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
e: 2307201407073445c48cce2e2d7fbdea1afc51c7c6ad262307201407073445c48cce2e2d7fbdea1afc51c7c6ad26
*************************** 10. row ***************************
a: 10
b: 23072014070734d3d9446802a44259755d38e6d163e820
c: 23072014070734d3d9446802a44259755d38e6d163e820
d: 23072014070734d3d9446802a44259755d38e6d163e82023072014070734d3d9446802a44259755d38e6d163e820
e: 23072014070734d3d9446802a44259755d38e6d163e82023072014070734d3d9446802a44259755d38e6d163e820
10 rows in set (0.00 sec)

After setting up the database, table and stored procedure, call the procedure to load data into the table we’ll use to benchmark fast index creation and dropping across MySQL versions.

mysql> call wb.proc01(1000000);
Query OK, 0 rows affected (7 min 31.18 sec)

mysql> select count(*) from wb.tbl01;
+----------+
| count(*) |
+----------+
|  1000000 |
+----------+
1 row in set (1.72 sec)

2. Create some secondary indexes:

Let’s use column c as the column to be indexed, creating an index called i.

##############################################
#
#: creating a secondary index on MySQL 5.0
#: Server version: 5.0.91 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 1000000 rows affected (7 min 33.84 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.0
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 1000000 rows affected (5 min 8.14 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

For the record, when I repeated the same object-creation steps on MySQL 5.1, I got this error message when calling the procedure:

ERROR 1436 (HY000): Thread stack overrun:  8264 bytes used of a 131072 byte stack, and 128000 bytes needed.  Use 'mysqld -O thread_stack=#' to specify a bigger stack.

I adjusted the server variable thread_stack to 192K and restarted mysqld.
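For the record, thread_stack is not dynamic, so the change was made in the configuration file before the restart; a sketch (the file path may differ per installation):

```ini
# /etc/my.cnf
[mysqld]
thread_stack = 192K
```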

##############################################
#
#: creating a secondary index on MySQL 5.1.70
#: Server version: 5.1.70 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 1000000 rows affected (7 min 10.73 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.1.70
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 1000000 rows affected (5 min 12.24 sec)
Records: 1000000  Duplicates: 0  Warnings: 0
##############################################
#
#: creating a secondary index on MySQL 5.5.33
#: Server version: 5.5.33 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 0 rows affected (1 min 21.68 sec)
Records: 0  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.5.33
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 0 rows affected (0.46 sec)
Records: 0  Duplicates: 0  Warnings: 0

##############################################
#
#: creating a secondary index on MySQL 5.6.17
#: Server version: 5.6.17 MySQL Community Server (GPL)
#
mysql> alter table wb.tbl01 add index i (c);
Query OK, 0 rows affected (1 min 39.08 sec)
Records: 0  Duplicates: 0  Warnings: 0

#
#: dropping a secondary index on MySQL 5.6.17
#
mysql> alter table wb.tbl01 drop index i;
Query OK, 0 rows affected (0.42 sec)
Records: 0  Duplicates: 0  Warnings: 0

The tests’ conclusion is that newer versions have improved over time and that Fast Index Creation really matters when dealing with secondary indexes. It reinforces the logic behind InnoDB, which is built around primary key lookups: you should declare a PK on all tables, and if you don’t, InnoDB will promote a suitable unique column to act as one or will internally create a hidden ROWID column. Secondary indexes can now be changed at any time with a much faster response from MySQL, making them easier to add and remove in many cases (imagine you’re redesigning the database’s indexing strategy).

Another point worth noting is that the on-disk data is smaller on newer versions. 5.0 and 5.1 behaved the same regarding data size on disk, but on 5.5 and 5.6 the same amount of data resulted in a different overall footprint:

[root@mysql56 mysql56]# df -lh
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              99M   20M   75M  21% /boot
tmpfs                 502M     0  502M   0% /dev/shm
/dev/sdb1             7.9G  1.3G  6.3G  17% /mysql50/datadir
/dev/sdc1             7.9G  1.3G  6.3G  17% /mysql51/datadir
/dev/sdd1             7.9G  744M  6.8G  10% /mysql55/datadir
/dev/sde1             7.9G  874M  6.7G  12% /mysql56/datadir

Index creation times (chart: BLOG_INDEX_CREATION)

Index dropping times (chart: BLOG_INDEX_DROPPING)


How to change the number or size of InnoDB Log Files

February 27th, 2014 admin Posted in MySQL A&D No Comments »

This week I was approached by a friend who was not aware of a feature available in MySQL 5.6, even though it has been widely commented on and welcomed by many as a very good addition. In fact, changing the transaction log size and the number of log files was quite intrusive in older versions of MySQL, something that could put data at risk if done carelessly.

With 5.6+, that operation became simple: the server resizes and creates new files automatically after detecting new settings for the redo/transaction log files, and one no longer needs to move the old log files out of the way, as was required in versions prior to 5.6. From now on, it’s just a matter of changing the configuration dynamically or in the configuration file and restarting mysqld.

On 5.6 it’s just a matter of adjusting the InnoDB system variables: set innodb_fast_shutdown so the log buffer is synced to the log files on shutdown, edit my.cnf to add or update the system variables that control transaction log behavior, and then restart MySQL. I’ve captured the log output of all the actions involved in doing that.

Below you’ll see that innodb_fast_shutdown was configured to make mysqld sync all the buffer content to the disk files and then shut down. That’s when the InnoDB-related system variables were adjusted. When mysqld was started again, it read the new configuration and adjusted the size and the number of transaction log files.

[root@localhost ~]# mysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.6.15-log MySQL Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show variables like 'innodb_log_%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+
5 rows in set (0.00 sec)

mysql> set global innodb_fast_shutdown=1;
Query OK, 0 rows affected (0.01 sec)

# stopped mysql
# edited configuration file
[mysqld]
innodb_log_file_size=16M
innodb_log_files_in_group=4

# mysqld restart
[root@localhost ~]# /etc/init.d/mysql restart
Shutting down MySQL.. SUCCESS!
Starting MySQL... SUCCESS!

# logs and auto adjust of files
[root@localhost ~]# tail -f /var/lib/mysql/localhost.localdomain.err
2014-02-27 07:10:29 2266 [Note] InnoDB: 128 rollback segment(s) are active.
2014-02-27 07:10:29 2266 [Note] InnoDB: Waiting for purge to start
2014-02-27 07:10:29 2266 [Note] InnoDB: 5.6.15 started; log sequence number 1828674
2014-02-27 07:10:30 2266 [Note] Server hostname (bind-address): '*'; port: 3306
2014-02-27 07:10:30 2266 [Note] IPv6 is available.
2014-02-27 07:10:30 2266 [Note]   - '::' resolves to '::';
2014-02-27 07:10:30 2266 [Note] Server socket created on IP: '::'.
2014-02-27 07:10:30 2266 [Note] Event Scheduler: Loaded 0 events
2014-02-27 07:10:30 2266 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.15-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
2014-02-27 07:11:29 2266 [Note] /usr/sbin/mysqld: Normal shutdown

[...]

2014-02-27 07:11:30 2266 [Note] Shutting down plugin 'binlog'
2014-02-27 07:11:30 2266 [Note] /usr/sbin/mysqld: Shutdown complete

140227 07:11:30 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid ended
140227 07:11:31 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2014-02-27 07:11:31 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2014-02-27 07:11:31 2488 [Warning] No argument was provided to --log-bin, and --log-bin-index was not used; so replication may break when this MySQL server acts as a master and has his hostname changed!! Please use '--log-bin=localhost-bin' to avoid this problem.
2014-02-27 07:11:31 2488 [Note] Plugin 'FEDERATED' is disabled.
2014-02-27 07:11:32 2488 [Note] InnoDB: Completed initialization of buffer pool
2014-02-27 07:11:32 2488 [Note] InnoDB: Highest supported file format is Barracuda.
2014-02-27 07:11:32 2488 [Warning] InnoDB: Resizing redo log from 2*3072 to 4*1024 pages, LSN=1828684
2014-02-27 07:11:32 2488 [Warning] InnoDB: Starting to delete and rewrite log files.
2014-02-27 07:11:32 2488 [Note] InnoDB: Setting log file ./ib_logfile101 size to 16 MB
2014-02-27 07:11:32 2488 [Note] InnoDB: Setting log file ./ib_logfile1 size to 16 MB
2014-02-27 07:11:33 2488 [Note] InnoDB: Setting log file ./ib_logfile2 size to 16 MB
2014-02-27 07:11:33 2488 [Note] InnoDB: Setting log file ./ib_logfile3 size to 16 MB
2014-02-27 07:11:33 2488 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2014-02-27 07:11:33 2488 [Warning] InnoDB: New log files created, LSN=1828684
2014-02-27 07:11:33 2488 [Note] InnoDB: 128 rollback segment(s) are active.
2014-02-27 07:11:33 2488 [Note] InnoDB: Waiting for purge to start
2014-02-27 07:11:33 2488 [Note] InnoDB: 5.6.15 started; log sequence number 1828684
2014-02-27 07:11:33 2488 [Note] Server hostname (bind-address): '*'; port: 3306
2014-02-27 07:11:33 2488 [Note] IPv6 is available.
2014-02-27 07:11:33 2488 [Note]   - '::' resolves to '::';
2014-02-27 07:11:33 2488 [Note] Server socket created on IP: '::'.
2014-02-27 07:11:33 2488 [Note] Event Scheduler: Loaded 0 events
2014-02-27 07:11:33 2488 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.6.15-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MySQL Community Server (GPL)
[...]

# checking system variables again
mysql> show variables like 'innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 16777216 |
| innodb_log_files_in_group   | 4        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+
5 rows in set (0.00 sec)

More information can be found at the link below:
http://dev.mysql.com/doc/refman/5.6/en/innodb-data-log-reconfiguration.html

I hope that helps!


Lock wait timeout exceeded; try restarting transaction

December 26th, 2013 admin Posted in MySQL A&D, MySQL Maintenance No Comments »

It’s very nice when you find good, well-explained messages in the MySQL error log and in the SHOW ENGINE INNODB STATUS output. The best part of the story is knowing where to go to check problems regarding a given resource. It’s not news that we’ve seen many messages about transaction lock waits and connections lost in the middle of a query execution. This time I used some spare time to run tests in order to force MySQL to serve me the error explicitly in the mysql client.

As we know, InnoDB is MySQL’s transactional engine, and every transaction runs at an isolation level configured by the database administrator or, as happens most of the time, at the default REPEATABLE READ. As isolation levels are beyond this post’s focus, I’d like to concentrate on the error message around lock waits.

Just to put that on the record and give a few hints for solving the transaction timeout problem, I played around with the innodb_lock_wait_timeout system variable, whose default value is 50 seconds; this is how long a transaction will wait to acquire a lock on a resource currently locked by another transaction. Imagine a queue: if someone is buying a ticket for a show, you must wait for that person to finish their purchase. In a database, though, if you’re the second transaction you’ll wait only innodb_lock_wait_timeout seconds!

Let’s play with that…(I will keep it simple, just to play around…)

mysql> create table test.t1(id int);
Query OK, 0 rows affected (0.10 sec)

mysql> insert into test.t1 set id=1;
Query OK, 1 row affected (0.01 sec)

On terminal A, I started a transaction with START TRANSACTION, which implicitly disables autocommit and therefore requires an explicit commit or rollback. My intention here is to hold a lock on a resource: the table test.t1 created above.

mysql> start transaction;
Query OK, 0 rows affected (0.00 sec)

mysql> update test.t1 set id=1;
Query OK, 2 rows affected (0.00 sec)
Rows matched: 2 Changed: 2 Warnings: 0

On terminal B, I first set innodb_lock_wait_timeout to 1 and then…

mysql> set innodb_lock_wait_timeout=1;
Query OK, 0 rows affected (0.01 sec)

mysql> select @@innodb_lock_wait_timeout;
+----------------------------+
| @@innodb_lock_wait_timeout |
+----------------------------+
|                          1 |
+----------------------------+
1 row in set (0.00 sec)

mysql> insert into test.t1 set id=3;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

This is a typical scenario where lock waits happen all the time, and it can cause important performance issues. Before increasing innodb_lock_wait_timeout, it’s better to check the queries and transactions started by the application in order to fix logic problems. Remember that triggers can also be the source of problems, since they run as part of the current transaction.

So, to wrap up this experiment, I reproduced the terminal A and B scenario above with innodb_lock_wait_timeout configured, just to check what SHOW ENGINE INNODB STATUS shows:

------------
TRANSACTIONS
------------
Trx id counter 1826
Purge done for trx's n:o < 1822 undo n:o < 0 state: running but idle
History list length 6
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 7, OS thread handle 0x7f50f05dd700, query id 58 localhost root init
show engine innodb status
---TRANSACTION 1825, ACTIVE 18 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 5, OS thread handle 0x7f50f061e700, query id 56 localhost root update
insert into test.t1 set id=3
------- TRX HAS BEEN WAITING 18 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 6 page no 3 n bits 72 index `GEN_CLUST_INDEX` of table `test`.`t1` trx id 1825 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;

Good advice in case you’re facing issues like this: structure the application code that connects to MySQL with try/catch, so the exception raised when a transaction dies waiting for a lock can be caught and the transaction re-executed, with no data lost at the end. If you’re seeing errors like these, make sure to address them as soon as possible to avoid data inconsistencies in your databases. The same can happen on the replication slave side, where almost the same error can break replication if master and slaves have different values configured for innodb_lock_wait_timeout.
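The same retry idea can even be sketched inside the database with a stored procedure; this is a minimal, hypothetical example (procedure name, target table and retry count are my assumptions), not a replacement for the application-side try/catch recommended above:

```sql
DELIMITER //
CREATE PROCEDURE test.insert_with_retry(IN p_id INT)
BEGIN
  DECLARE v_tries INT DEFAULT 0;
  DECLARE v_done  INT DEFAULT 0;
  DECLARE v_err   INT DEFAULT 0;
  -- Error 1205 is ER_LOCK_WAIT_TIMEOUT; the handler just flags it so the
  -- loop below can decide whether to retry the statement.
  DECLARE CONTINUE HANDLER FOR 1205 SET v_err = 1;

  WHILE v_done = 0 AND v_tries < 3 DO
    SET v_err = 0;
    INSERT INTO test.t1 SET id = p_id;
    IF v_err = 0 THEN
      SET v_done = 1;               -- success, leave the loop
    ELSE
      SET v_tries = v_tries + 1;    -- lock wait timed out, try again
    END IF;
  END WHILE;
END //
DELIMITER ;
```

With the default innodb_rollback_on_timeout=OFF, only the timed-out statement is rolled back, so retrying just that statement is meaningful.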

Have you played around with some InnoDB variable and got some simple scenario?

Cheers!!


MySQL 5.5.X – Sort aborted

December 26th, 2013 admin Posted in MySQL A&D, MySQL Maintenance, MySQL Tuning No Comments »

This morning I started investigating a filesort problem happening on a report server. What caught my attention the most was what is really behind the error message that appears many times in that server’s error log. This particular server is a slave used just to extract reports on business data, so this kind of server is generally tuned to respond well to read queries that use aggregations such as COUNT(), SUM() and AVG() and group data by some particular column; ideally most of the data stays in memory rather than on disk, and all that story.

But what is behind the message “[Warning] Sort aborted, host:”? Researching similar cases on the internet, I found that the problems MySQL reports in the error log usually fall into these possibilities:

Insufficient disk space in tmpdir prevented tmpfile from being created

This one is easier to check: df -h /tmp will show the space available in the temporary directory. A good question here is: what do I look for when there appears to be enough space in the /tmp dir? That’s the time to take the query causing the issue and re-execute it while monitoring the /tmp dir to see whether it fills up.
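Note that the sort files do not necessarily go to /tmp; before watching a directory, it’s worth confirming which one the server actually uses and how often temporary work spills to disk:

```sql
-- Directory where filesort and temporary files are written:
SHOW GLOBAL VARIABLES LIKE 'tmpdir';

-- Counters showing how many temporary tables were created and
-- how many of them had to go to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
```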

Somebody ran KILL in the middle of a filesort

At this point, I agree with Suresh Kuna when he said that “as a DBA, we can’t do much with the first point apart from informing the customer to check the application side for connection drop-outs”. The query can be stopped by an error reading packets, a transaction timeout or even a replication slave timeout. Many variables get involved when analyzing this kind of problem, but mainly it comes down to a user giving up on the report’s query in the middle of processing.

The server was shut down while some queries were sorting

When this error is reported, you can look at the timestamp associated with it and then go through the surrounding error log entries for the details of the MySQL Server shutdown.

A transaction got rolled back or aborted due to lock wait timeout or deadlock

At this point we can consider many things, but the main ones are checking SHOW ENGINE INNODB STATUS, which reports all deadlocks when InnoDB handles the tables, and the error log, which reports deadlocks involving local transactions or, if the server acts as a slave, deadlocks involving replicated transactions; innodb_lock_wait_timeout and slave_net_timeout can help here. Another variable worth considering is slave_transaction_retries, which controls how many times a replication slave SQL thread retries a transaction that fails because of an InnoDB deadlock or because its execution time exceeded innodb_lock_wait_timeout.
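A quick way to review the three variables mentioned above on both master and slaves:

```sql
-- Inspect the timeout and retry settings relevant to this failure mode.
SHOW GLOBAL VARIABLES WHERE Variable_name IN
  ('innodb_lock_wait_timeout', 'slave_net_timeout', 'slave_transaction_retries');
```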

Unexpected errors, such as a corrupt source table or even a corrupt tmp table

In this case, depending on the size of the involved table (sometimes you won’t be able to tell which table is the target just by reading the error log), a simple CHECK TABLE can be effective in finding out whether the table has corrupted pages or other errors.

Processing of a subquery failed which was also sorting

This is a classic case most of the time. The good news is that when a subquery fails while sorting, it’s a good opportunity to review the value configured for sort_buffer_size. But take care: do not increase it without checking the root cause and the Sort_% status variables, so you can work out the value that best fits the server’s filesort requirements.

According to the online MySQL manual, only increase the sort_buffer_size value when Sort_merge_passes is greater than zero.
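A minimal check before touching anything, following the manual’s advice:

```sql
-- If this counter keeps growing, sorts are spilling to disk in multiple
-- merge passes and a larger sort buffer may help:
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';

-- Current buffer size, for reference (it is allocated per sorting thread,
-- so oversizing it wastes memory under concurrency):
SHOW GLOBAL VARIABLES LIKE 'sort_buffer_size';
```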

Sheri Cabral wrote about that: http://www.pythian.com/blog/sort_buffer_size-and-knowing-why/


MySQL 5.6 Thread Pool

September 30th, 2013 admin Posted in MySQL A&D, MySQL Tuning No Comments »

Considering the problem already discussed on this blog regarding user connection scaling versus thread creation in MySQL versus the operating system (a CentOS 6.0 in that case), I recently decided to sit down with the MySQL manual and check in detail what the feature promises. I had run some tests a while back, but revisiting is always worthwhile.

The Thread Pool, a plugin that ships with the Enterprise edition of MySQL offered by Oracle, was introduced to improve scalability as the number of users grows. Even though I believe it’s better to resolve queries quickly with good performance than to pile up users in the database server and thereby cause much heavier processing through thread creation, we still have to account for long-running statements that can take a large share of the database host’s resources.

The plugin’s intention is to make MySQL scale better as more connections are made. According to the manual, the more connections, the more stable and the faster the engine’s (mysqld) response will be. Before getting to the interesting part, some points of attention:

  • The Thread Pool is not enabled by default; you need to configure the server to load the plugin;
  • Its system variables are only available when the plugin is loaded;

After enabling the plugin, check its system variables and understand what each one does.

Learn more in the manual. I won’t go into the details here, because my goal is to show that the feature really delivers good results; below are the results of a small benchmark with mysqlslap…

[root@threadpool ~]# mysqlslap --user=root --password=123456 --auto-generate-sql --concurrency=100,150,200,250,300 --number-of-queries=2000
Warning: Using a password on the command line interface can be insecure.
Benchmark
Average number of seconds to run all queries: 2.675 seconds
Minimum number of seconds to run all queries: 2.675 seconds
Maximum number of seconds to run all queries: 2.675 seconds
Number of clients running queries: 100
Average number of queries per client: 20

Benchmark
Average number of seconds to run all queries: 2.224 seconds
Minimum number of seconds to run all queries: 2.224 seconds
Maximum number of seconds to run all queries: 2.224 seconds
Number of clients running queries: 150
Average number of queries per client: 13

Benchmark
Average number of seconds to run all queries: 2.363 seconds
Minimum number of seconds to run all queries: 2.363 seconds
Maximum number of seconds to run all queries: 2.363 seconds
Number of clients running queries: 200
Average number of queries per client: 10

Benchmark
Average number of seconds to run all queries: 2.035 seconds
Minimum number of seconds to run all queries: 2.035 seconds
Maximum number of seconds to run all queries: 2.035 seconds
Number of clients running queries: 250
Average number of queries per client: 8

Benchmark
Average number of seconds to run all queries: 1.984 seconds
Minimum number of seconds to run all queries: 1.984 seconds
Maximum number of seconds to run all queries: 1.984 seconds
Number of clients running queries: 300
Average number of queries per client: 6

 

The next step is to check the proportion of stalled queries through the INFORMATION_SCHEMA.TP_THREAD_GROUP_STATS table, which is only available when the server is running the Thread Pool plugin.
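The stored procedure called below is not shown in the post; a minimal version of it might be just a wrapper around this query (the procedure name is the only assumption):

```sql
-- Ratio of stalled queries to total queries across all thread groups;
-- values near zero mean the pool is keeping up with the load.
CREATE PROCEDURE test.stalledThreads()
  SELECT SUM(STALLED_QUERIES_EXECUTED) / SUM(QUERIES_EXECUTED)
    FROM INFORMATION_SCHEMA.TP_THREAD_GROUP_STATS;
```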

mysql> call test.stalledThreads;
+-------------------------------------------------------+
| SUM(STALLED_QUERIES_EXECUTED) / SUM(QUERIES_EXECUTED) |
+-------------------------------------------------------+
| 0.0000                                                |
+-------------------------------------------------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.00 sec)

No stalled queries; soon I’ll post the Thread Pool in action. See you!


Got an error reading communication packets

July 12th, 2012 admin Posted in MySQL A&D, MySQL Maintenance No Comments »

The title of this post is exactly the error message you may receive when checking the health of your MySQL database server, in this case a MySQL 5.0. This week I’m working with a customer located in Brazil that has about 1502 simultaneous connections to MySQL, which serves as the data repository of an ERP that centralizes the company’s operations. Several stores access the same MySQL instance configured as a central repository; naturally, this customer had previously moved to a replicated setup, with one MASTER server and 7 SLAVEs, each with distinct roles.

Anyway, regardless of the customer’s architecture, we ran into a problem soon after the setup started running. Checking the MySQL error log, we found the following scenario:

root@master1:/var/log# tail -f /var/log/mysql/mysql.err
120712 14:22:55 [Warning] Aborted connection 173570 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)
120712 14:23:15 [Warning] Aborted connection 173025 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)
120712 14:27:48 [Warning] Aborted connection 169655 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)
120712 14:29:00 [Warning] Aborted connection 165547 to db: 'sqldados' user: 'root' host: '' (Got an error reading communication packets)
120712 14:29:23 [Warning] Aborted connection 172752 to db: 'unconnected' user: 'sink02' host: '' (Got an error reading communication packets)
120712 14:30:27 [Warning] Aborted connection 173886 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)
120712 14:31:54 [Warning] Aborted connection 174079 to db: 'unconnected' user: 'sink18' host: '' (Got an error reading communication packets)
120712 14:34:16 [Warning] Aborted connection 171530 to db: 'sqldados' user: 'root' host: '' (Got an error reading communication packets)

Initially, we thought it was a network latency problem in which read and write connections were being closed even while the thread status remained SLEEP. So we adjusted MySQL’s net_% variables. The first step was to reset them all:

mysql> set net_buffer_length = DEFAULT;
Query OK, 0 rows affected (0.00 sec)

mysql> show global variables like 'net%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| net_buffer_length | 16384 |
| net_read_timeout  | 60    |
| net_retry_count   | 10    |
| net_write_timeout | 60    |
+-------------------+-------+
4 rows in set (0.01 sec)

To test whether that would eliminate the error, we configured the net_read_timeout and net_write_timeout variables with higher values:

mysql> set global net_read_timeout=360;
Query OK, 0 rows affected (0.00 sec)

mysql> set global net_write_timeout=360;
Query OK, 0 rows affected (0.00 sec)

mysql> show global variables like 'net%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| net_buffer_length | 16384 |
| net_read_timeout  | 360   |
| net_retry_count   | 10    |
| net_write_timeout | 360   |
+-------------------+-------+
4 rows in set (0.00 sec)

Even so, the error was not fixed; following the error log with tail -f, it showed up again. The solution was to raise max_allowed_packet so larger packets would be supported, and then the error stopped.

mysql> select concat(format(@@max_allowed_packet/1024/1024,2),'MB') "max_allowed_packet";
+--------------------+
| max_allowed_packet |
+--------------------+
| 16.00MB            |
+--------------------+
1 row in set (0.01 sec)

mysql> set max_allowed_packet=128*1024*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> select concat(format(@@max_allowed_packet/1024/1024,2),'MB') "max_allowed_packet";
+--------------------+
| max_allowed_packet |
+--------------------+
| 128.00MB           |
+--------------------+
1 row in set (0.01 sec)

After that, we watched the log for two more hours and the "Got an error reading communication packets" error did not recur. It is also worth noting that this error can be caused when the application connecting to MySQL does not close its connections properly (without a mysql_close(), for example), which increments the Aborted_clients status variable.

mysql> show global status like 'Aborted%';
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| Aborted_clients  | 2866  |
| Aborted_connects | 17    |
+------------------+-------+
2 rows in set (0.00 sec)
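As a quick way to see which applications abort connections most often, the warnings shown at the top of this post can be tallied straight from the error log. A minimal sketch (the regex assumes the log format shown above; the helper name is mine, not a MySQL tool):

```python
import re
from collections import Counter

# Matches lines like:
# 120712 14:27:48 [Warning] Aborted connection 169655 to db: 'unconnected'
#   user: 'sink01' host: '' (Got an error reading communication packets)
PATTERN = re.compile(r"\[Warning\] Aborted connection \d+ to db: '(?P<db>[^']*)' "
                     r"user: '(?P<user>[^']*)'")

def count_aborted_by_user(lines):
    """Count 'Aborted connection' warnings per user in an error-log stream."""
    counts = Counter()
    for line in lines:
        m = PATTERN.search(line)
        if m:
            counts[m.group("user")] += 1
    return counts

log = [
    "120712 14:27:48 [Warning] Aborted connection 169655 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)",
    "120712 14:29:00 [Warning] Aborted connection 165547 to db: 'sqldados' user: 'root' host: '' (Got an error reading communication packets)",
    "120712 14:30:27 [Warning] Aborted connection 173886 to db: 'unconnected' user: 'sink01' host: '' (Got an error reading communication packets)",
]
print(count_aborted_by_user(log))  # Counter({'sink01': 2, 'root': 1})
```

Pointing this at the real error log quickly shows whether one user (that is, one application) is responsible for most of the aborted connections.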


Stress-testing MySQL with mysqlslap

June 15th, 2012 admin Posted in MySQL A&D, MySQL Maintenance, MySQL Tuning 2 Comments »

It has long been necessary to run several tests before putting a server into production, and the tests the development team designs are not always the best ones from the database administrator's point of view. In fact, both teams need to work together and stay aligned on such a task so that nothing escapes either perspective, application or database; stress tests, or benchmarks, are a decisive factor in electing (or rejecting) a product as a solution.

This week we had an interesting engagement with a customer in Brazil who needed to be sure a MySQL database server could go into production and handle heavy demand, so they called us in to review the whole configuration, fix performance metrics, and inspect disks, memory and processing power. After that work we used "mysqlslap", a native MySQL benchmark tool shipped along with several other client and non-client programs when you install the world's most popular database server. mysqlslap accepts many options and, on first use, creates a database to manage its own metadata.

What I want to show here is that, after auditing and thoroughly tuning the customer's MySQL instance, which runs on Red Hat 6, and running some custom scripts created by WBConsultinga to optimize data pages and update object statistics, we started testing with mysqlslap, first to check whether we would have problems with the 3,000 simultaneous user connections the customer required for the system.

So we ran the first test with 3,000 clients firing 1,000 queries over simultaneous connections...

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=3000 --number-of-queries=1000
Benchmark
Average number of seconds to run all queries: 33.098 seconds
Minimum number of seconds to run all queries: 33.098 seconds
Maximum number of seconds to run all queries: 33.098 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

The time for a single iteration can look high if we consider ad hoc queries. For that, mysqlslap has an option that controls how many times to repeat the same iteration (-i or --iterations). We ran with -i 5 and saw that the cache and buffer adjustments were working well...

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=3000 --auto-generate-sql-write-number=100 -i 5
Benchmark
Average number of seconds to run all queries: 19.387 seconds
Minimum number of seconds to run all queries: 17.967 seconds
Maximum number of seconds to run all queries: 22.998 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

This gave us lower average and minimum times than the ad hoc run. Checking MySQL's status variables, we noticed that a lot of data had been loaded into the memory structures, both the InnoDB Buffer Pool and the MyISAM Key Buffer.

mysql> show status like 'Innodb_buffer_pool%';
+---------------------------------------+-----------+
| Variable_name                         | Value     |
+---------------------------------------+-----------+
| Innodb_buffer_pool_pages_data         | 5638      |
| Innodb_buffer_pool_pages_dirty        | 0         |
| Innodb_buffer_pool_pages_flushed      | 13895     |
| Innodb_buffer_pool_pages_free         | 518648    |
| Innodb_buffer_pool_pages_misc         | 1         |
| Innodb_buffer_pool_pages_total        | 524287    |
| Innodb_buffer_pool_read_ahead_rnd     | 0         |
| Innodb_buffer_pool_read_ahead         | 0         |
| Innodb_buffer_pool_read_ahead_evicted | 0         |
| Innodb_buffer_pool_read_requests      | 764868549 |
| Innodb_buffer_pool_reads              | 1865      |
| Innodb_buffer_pool_wait_free          | 0         |
| Innodb_buffer_pool_write_requests     | 665820    |
+---------------------------------------+-----------+
13 rows in set (0.01 sec)

mysql> show status like 'Key_%';
+------------------------+---------+
| Variable_name          | Value   |
+------------------------+---------+
| Key_blocks_not_flushed | 1023    |
| Key_blocks_unused      | 17      |
| Key_blocks_used        | 2514736 |
| Key_read_requests      | 0       |
| Key_reads              | 2876589 |
| Key_write_requests     | 4566867 |
| Key_writes             | 4567890 |
+------------------------+---------+
7 rows in set (0.00 sec)
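A common way to read the Key_% counters above is to derive the key-cache read miss ratio (Key_reads / Key_read_requests) and the write ratio (Key_writes / Key_write_requests). A small sketch (my own helper, guarding against the zero request counter seen in the output above):

```python
def key_cache_ratios(status):
    """Return (read_miss_ratio, write_ratio) from SHOW STATUS 'Key_%' counters.

    A ratio is None when the corresponding request counter is zero,
    as with Key_read_requests=0 in the sample above."""
    def ratio(done, requests):
        return done / requests if requests else None
    return (ratio(status["Key_reads"], status["Key_read_requests"]),
            ratio(status["Key_writes"], status["Key_write_requests"]))

# counters copied from the SHOW STATUS output above
sample = {"Key_reads": 2876589, "Key_read_requests": 0,
          "Key_writes": 4567890, "Key_write_requests": 4566867}
read_miss, write_ratio = key_cache_ratios(sample)
print(read_miss, round(write_ratio, 4))
```

As a rule of thumb, a read miss ratio far below 1 means the key cache is absorbing most index reads; a write ratio near 1 means almost every index write request hits the disk.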

Finally, a test stepping up the number of simultaneous connections: starting at 500, then 1000, 1500, and finally 3000:

[root@mysqlsrv101 ~]# mysqlslap --user=root --password=XXX --auto-generate-sql --concurrency=500,1000,1500,3000 --number-of-queries=100
Benchmark
Average number of seconds to run all queries: 3.084 seconds
Minimum number of seconds to run all queries: 3.084 seconds
Maximum number of seconds to run all queries: 3.084 seconds
Number of clients running queries: 500
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 4.054 seconds
Minimum number of seconds to run all queries: 4.054 seconds
Maximum number of seconds to run all queries: 4.054 seconds
Number of clients running queries: 1000
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 6.993 seconds
Minimum number of seconds to run all queries: 6.993 seconds
Maximum number of seconds to run all queries: 6.993 seconds
Number of clients running queries: 1500
Average number of queries per client: 0

Benchmark
Average number of seconds to run all queries: 16.021 seconds
Minimum number of seconds to run all queries: 37.092 seconds
Maximum number of seconds to run all queries: 22.008 seconds
Number of clients running queries: 3000
Average number of queries per client: 0

The resource usage summary was:

Peak CPU at the end of the tests: 49%
Peak I/O rate: 42%
Peak memory usage: 70%
Peak swap usage: 0%

We were able to meet the number of simultaneous connections the customer needed by adjusting the @@max_connections and @@max_user_connections variables accordingly. mysqlslap helped us push MySQL to the limits the project required and prove that the database server was ready to go into production.
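From the concurrency sweep above you can also derive an effective throughput per step, dividing the number of queries by the average run time. A quick sketch using the figures reported by mysqlslap (100 queries per step, as passed in --number-of-queries):

```python
def throughput(number_of_queries, avg_seconds):
    """Effective queries per second for one mysqlslap step."""
    return number_of_queries / avg_seconds

# (concurrency, average seconds) pairs from the benchmark output above
steps = [(500, 3.084), (1000, 4.054), (1500, 6.993), (3000, 16.021)]
for clients, avg in steps:
    print(clients, round(throughput(100, avg), 1), "q/s")
```

Watching this number fall as concurrency rises is a simple way to spot the point where the server stops scaling and starts queueing.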


Speeding up data loads and restores in MySQL

May 13th, 2012 admin Posted in MySQL A&D, MySQL Backup, MySQL Maintenance, MySQL Tuning No Comments »

Many friends write asking how to speed up a data load or a backup restore in MySQL. Indeed, depending on your hardware, your storage engine and per-client variable configuration, and your database design, this process can take many hours if some care is not taken before starting. A while ago, on a consulting job where the customer needed a daily load of all account activity consolidated in text files, we wrapped up our engagement after building an application that, besides loading the data and doing various transformations around the LOAD DATA INFILE statements, reconfigured several MySQL settings at the application's runtime so the process was truly "sped up".

There are several points to watch:

  • KEY, UNIQUE and FULLTEXT indexes, for MyISAM tables;
  • Foreign keys, for InnoDB tables;
  • AUTOCOMMIT mode, for InnoDB tables.

For the tests in this post, we'll use a virtual machine running CentOS 6.0 with MySQL 5.6.

[root@master ~]# mysqladmin -u root -p123456 version
mysqladmin Ver 8.42 Distrib 5.6.4-m7, for Linux on i686


Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version 5.6.4-m7-log
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/lib/mysql/mysql.sock
Uptime: 42 min 17 sec

To disable KEY and UNIQUE indexes, just write a SELECT, or even a script, that walks table by table through your physical database model and disables each table's indexes. I like doing this via the mysql client with the uppercase -B option, which runs the connection to mysqld in batch mode. If you are more comfortable combining MySQL with the file system, you can use SELECT ... INTO OUTFILE.

# create tables with KEY indexes, i.e., structures used to speed up data lookups
[root@master ~]# for i in {1..5}; do mysql -u root -p123456 test -e "create table tb$i(id$i int,key(id$i)) engine=myisam;"; done

# list the created tables
[root@master ~]# mysql -u root -p123456 -e "show tables from test like 'tb%'"
+----------------------+
| Tables_in_test (tb%) |
+----------------------+
| tb1                  |
| tb2                  |
| tb3                  |
| tb4                  |
| tb5                  |
+----------------------+

# list the indexes created on the id columns of the tables we just created
[root@master ~]# mysql -u root -p123456 -e "select column_name, column_key from information_schema.columns where table_schema='test' and table_name like 'tb%'"
+-------------+------------+
| column_name | column_key |
+-------------+------------+
| id1         | MUL        |
| id2         | MUL        |
| id3         | MUL        |
| id4         | MUL        |
| id5         | MUL        |
+-------------+------------+

Now that we have indexes to disable, we can run a SELECT that returns the ALTER TABLE statements needed to disable the indexes of the tables in the database targeted by the load.

# running in batch mode
[root@master ~]# mysql -u root -p123456 -B -e "select concat('alter table ',table_name,' disable keys;') from information_schema.tables where table_schema='test'"
concat('alter table ',table_name,' disable keys;')
alter table t1 disable keys;
alter table t2 disable keys;
alter table t3 disable keys;
alter table tb1 disable keys;
alter table tb2 disable keys;
alter table tb3 disable keys;
alter table tb4 disable keys;
alter table tb5 disable keys;

# running with SELECT ... INTO OUTFILE
[root@master ~]# mysql -u root -p123456 -e "select concat('alter table ',table_name,' disable keys;') into outfile '/tmp/alterDisableKey' from information_schema.tables where table_schema='test'"
[root@master ~]#

Considering the second option, go back to mysql and execute the contents of the file saved in /tmp:
# executing the file via source

[root@master ~]# mysql -u root -p123456 test -e "source /tmp/alterDisableKey;"
# confirming the indexes were disabled

mysql> show index from tb1\G
*************************** 1. row ***************************
Table: tb1
Non_unique: 1
Key_name: id1
Seq_in_index: 1
Column_name: id1
Collation: A
Cardinality: NULL
Sub_part: NULL
Packed: NULL
Null: YES
Index_type: BTREE
Comment: disabled # disabled!
Index_comment:
1 row in set (0.00 sec)

After loading the data, run ALTER TABLE <table_name> ENABLE KEYS!
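The same statement generation can also be done outside the server. A hypothetical helper (my own, not a MySQL tool) that emits the DISABLE/ENABLE KEYS pair around a load:

```python
def alter_keys(tables, action):
    """Build ALTER TABLE ... DISABLE|ENABLE KEYS statements for a list of tables."""
    if action not in ("DISABLE", "ENABLE"):
        raise ValueError("action must be DISABLE or ENABLE")
    return [f"ALTER TABLE {t} {action} KEYS;" for t in tables]

tables = ["tb1", "tb2", "tb3", "tb4", "tb5"]
print("\n".join(alter_keys(tables, "DISABLE")))
# ... run the LOAD DATA INFILE statements here ...
print("\n".join(alter_keys(tables, "ENABLE")))
```

Feeding the generated statements to the mysql client before and after the load gives the same effect as the INFORMATION_SCHEMA query shown above.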

Disabling foreign key checks on InnoDB tables (referential integrity checking really does slow down a data restore) is simpler than the previous step. In your session, just reset the foreign_key_checks variable, as shown below:

mysql> show variables like 'foreign%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| foreign_key_checks | ON    |
+--------------------+-------+
1 row in set (0.00 sec)

mysql> SET FOREIGN_KEY_CHECKS=OFF;
Query OK, 0 rows affected (0.05 sec)

mysql> SET FOREIGN_KEY_CHECKS=0;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'foreign%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| foreign_key_checks | OFF   |
+--------------------+-------+
1 row in set (0.00 sec)

The final point to wrap up this post: AUTOCOMMIT! First, let's understand what it does and what it controls. Since InnoDB is a transactional storage engine, for each UPDATE, INSERT or DELETE executed, InnoDB issues a COMMIT right after the statement; that is, when AUTOCOMMIT is set to 1 or ON, which is the default. Since we want to run many operations and only issue an explicit COMMIT at the end, which is what mysqldump does with a backup file generated with the -e option, we need to set AUTOCOMMIT to OFF or 0.

# set autocommit in the MySQL configuration file, save it, and restart MySQL
[root@master ~]# vim /etc/my.cnf

[mysqld]
autocommit=0

[root@master ~]# service mysql restart
Shutting down MySQL ... [ OK ]
Starting MySQL      ... [ OK ]

Done: your MySQL database server is now configured to go through restores faster and to be the target of heavy data loads. As a bonus, look into how the bulk_insert_buffer_size variable works; it will also help here.
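Putting the points above together, the session setup for a fast load can be scripted. A sketch (the SET statements use the real server variables discussed in this post; the wrapper itself is a hypothetical helper that just builds the statement list to be fed to the mysql client):

```python
def fast_load_wrapper(load_statements):
    """Wrap load statements with the session settings discussed above:
    no foreign key checks, no unique checks, manual commit."""
    prologue = ["SET foreign_key_checks = 0;",
                "SET unique_checks = 0;",
                "SET autocommit = 0;"]
    epilogue = ["COMMIT;",
                "SET unique_checks = 1;",
                "SET foreign_key_checks = 1;"]
    return prologue + list(load_statements) + epilogue

script = fast_load_wrapper(
    ["LOAD DATA INFILE '/tmp/data.csv' INTO TABLE tb1;"])
print("\n".join(script))
```

Note the ordering: the COMMIT comes before the checks are re-enabled, so the re-enable statements never touch uncommitted data.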

See you.

 


Little comments about InnoDB

November 8th, 2011 admin Posted in MySQL A&D No Comments »

Hello everyone, here we go with a new blog post, this time about InnoDB inside MySQL 5.5. As you should know, this new version uses InnoDB Plugin version 1.1, which has many new adjustable resources. What most caught my attention was the impressive way users can tune it to get much more performance than MyISAM, for example. It was benchmarked here.

At the beginning, when Oracle announced the new default storage engine (InnoDB, bingo!), many users were scared and started asking why that change was really necessary. Many other users just nodded, and now we have good proof of that necessity: more scale, security and reliability.

Scale [up], because data can be compressed to use more memory and processor instead of disk (avoiding overhead), more transactions can be started concurrently, and more CPU cores can be addressed, since MySQL 5.5.4 is better prepared to scale up to 32 cores. You can read about it on DimitriK's (dim) weblog.


Security shows up when you compare InnoDB with MyISAM: with InnoDB you get good performance along with safety and crash recovery, using transaction logs and keeping data and indexes inside a tablespace, which improves not only security but availability too.

Remember to use the new file format (innodb_file_format) configured as Barracuda in order to make all the new functionality available to your environment. Unlike MyISAM, InnoDB has its own transaction logs; if you specify no InnoDB configuration options, MySQL creates an auto-extending 10MB datafile named `ibdata1' and two 5MB log files named `ib_logfile0' and `ib_logfile1' in the MySQL data directory (DATADIR, normally /var/lib/mysql). As soon as a transaction receives a COMMIT or a ROLLBACK, a checkpoint is created, the transaction is registered or rolled back, and life goes on.

InnoDB's behavior at this point depends on some interesting settings: innodb_log_buffer_size (how much transactional data is kept in the log buffer), innodb_max_dirty_pages_pct (the percentage of dirty pages that may remain in the buffer pool), innodb_flush_log_at_trx_commit (how data is flushed to disk; it accepts values from 0 to 2), and innodb_flush_method, which decides how files are opened and how dirty pages modified since the last flush are written out.

InnoDB parameterization has shown that there are many combinations for getting more performance under certain conditions. When you're about to restore large databases, innodb_flush_method=O_DSYNC can help, although it will increase swapping as much as possible. To get good performance when restoring a database, make sure to disable unique and foreign key checks, configure autocommit appropriately, and create the backup with the -e option (when using mysqldump).

You can use the following variables to handle InnoDB configuration and behavior:

[mysqld]

# innodb file format configuration
innodb_file_format = BARRACUDA # it will "turn on" all InnoDB Plugin new features
innodb_file_per_table = 1 # it will "turn on" one tablespace file per table

# innodb log file configuration
innodb_log_group_home_dir=/var/log/mysql/innodb # where files will end up
innodb_log_files_in_group=8 # the amount of log files current instance will have
innodb_log_file_size=500M # innodb_log_files_in_group * innodb_log_file_size must stay below 4096M (4G)

# innodb log buffer configuration - think of the buffer cycling through the log files between flushes
innodb_log_buffer_size=1024M # in environments with large transactions, a large value saves disk I/O; click here to learn how to size it better
innodb_flush_method=O_DIRECT # bypass the OS buffer cache instead of dedicating RAM to it twice
#
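The comment about the combined redo log size can be checked with a little arithmetic. A sketch of the pre-5.6 constraint (innodb_log_files_in_group * innodb_log_file_size must stay under 4GB; the helper is mine):

```python
def redo_log_ok(files_in_group, file_size_mb, limit_mb=4096):
    """True when the combined InnoDB redo log size is below the 4GB limit."""
    return files_in_group * file_size_mb < limit_mb

# the configuration shown above: 8 files of 500M = 4000M, just under the limit
print(redo_log_ok(8, 500))
# 8 files of 512M = exactly 4096M, which is not allowed
print(redo_log_ok(8, 512))
```

Running this before editing my.cnf avoids a failed server start after a redo log resize.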

Reliability, because all these features together deliver a good set of subsystems for good performance, which can be achieved by using innodb_file_per_table to create one tablespace file per table (less I/O in this case), plus the ability to compress data so it occupies less space in tablespace segments, spends fewer extents, and fits more data into memory.

A nice touch, broadly speaking, is that InnoDB can be configured to use external storage such as a SAN, or other machines, to store its structures and data. With certain variables you can, for example, put InnoDB files on other disks to get more performance. The variables below make it possible:

[mysqld]
innodb_data_home_dir  = /nfs1/innodb
innodb_data_file_path = ibdata1:50M;ibdata2:50M:autoextend # paths are relative to innodb_data_home_dir

Following good practices, it is really important, whenever you can, to separate data and transaction logs onto different disks.


Data compression across storage engines

November 8th, 2011 admin Posted in MySQL A&D No Comments »

Many problems arise when a company uses one of the database products on the market but does not release the necessary investment in infrastructure, and lack of disk space is among the most common incidents in databases. If there are no resources to expand the disks, or even to provision cloud storage, the database administrator must build the competence to analyze the database and apply the best data compression that one of MySQL's native storage engines offers. I'm opening this post to talk about the Archive storage engine.

I have one customer in particular with a large database backing a workflow system. This system stores data in a database on a MySQL instance, where we have several tables with constantly changing data and others that receive only INSERT and SELECT (write and read): parameterization data and production schedules for the various assembly lines where production inputs are fitted to the template so that, at the end, they become a finished product. Without these parameters, the system cannot filter which products should be produced in larger or smaller quantities, nor their characteristics. The tables that receive only INSERTs are fed every day by the production schedule prepared by the engineers, and so on.

To save space, since disks could not be purchased at the moment (and probably not any time soon), I thought we could take the production-parameter tables, which represent 38% of the disk space allocated to data and indexes, and convert them from InnoDB to Archive; but I had no idea how much those tables' data could be compressed. So I ran the following test:

1-) I created a table tb_innodb with an id column of type INT, handled by the InnoDB engine;

2-) I created a table tb_myisam with an id column of type INT, handled by the MyISAM engine;

3-) I created a table tb_archive with an id column of type INT, handled by the Archive engine;

4-) I created a stored procedure to load 1,000,000,000,000 rows into the created tables;

At the end, I could query INFORMATION_SCHEMA, using the TABLES table, to check the data size of the tables and compare the results focusing on each engine's compression. The result was as follows:

Note that while the data retrieved from the InnoDB table amounts to a bit over 30MB, a table with the same amount of data handled by the Archive storage engine takes less than 10% of that size. I put the MyISAM table in this test just to get a sense of its compression level as well. This way we freed a large amount of disk space for tables that receive only SELECT and INSERT.
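The comparison can be expressed as a simple ratio. A sketch with illustrative sizes (the exact figures came from a screenshot not reproduced here, so the numbers below are placeholders consistent with the text: a bit over 30MB for InnoDB, under 10% of that for Archive):

```python
def compression_pct(original_mb, compressed_mb):
    """Compressed size as a percentage of the original size."""
    return 100.0 * compressed_mb / original_mb

innodb_mb, archive_mb = 30.5, 2.8   # placeholder sizes, see note above
pct = compression_pct(innodb_mb, archive_mb)
print(f"Archive uses {pct:.1f}% of the InnoDB size")
```

The same calculation, run over DATA_LENGTH + INDEX_LENGTH from INFORMATION_SCHEMA.TABLES, gives the real ratio for your own tables.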

Warning: Archive tables only accept SELECT and INSERT; other statements will fail!


MySQL 5.6 and the new MySQL Partitioning resources

November 8th, 2011 admin Posted in MySQL A&D No Comments »

There are lots of new features in MySQL 5.6 related to the MySQL partition engine that we can apply to database tables. With the new version, besides the performance improvements provided by partitioned tables (a resource we have had since MySQL 5.1), database administrators can improve their environment's architecture and design to retrieve information more efficiently, selecting data from just one specific table partition: partition pruning.

We will start this post by creating a table that stores data from a product payment system, partitioned using the RANGE() partition function with the MONTH() function nested inside. If you want to know more about partitioned tables using the RANGE() partition function, click here (a post written in Portuguese).

Scenario

Imagine you're developing a new database to support a system that interfaces with the sales department. Obviously, that system must be as fast as it can, so as not to waste customers' time or lose the opportunity to sell more products (sales people are generally very eager, and the organization's platform systems must support them). With this in mind, we'll create the following [example] table to meet some performance requirements, partitioning by MONTH():

[root@innodbserver mysql]# mysql -u root
Welcome to the MySQL monitor.
Your MySQL connection id is 1
5.6.2-m5-log MySQL Community Server (GPL)

mysql> use test
Database changed

mysql> CREATE TABLE t1 (
    -> id int not null auto_increment,
    -> value decimal(10,2) not null,
    -> payment_date datetime not null,
    -> PRIMARY KEY(id,payment_date)
    -> ) PARTITION BY RANGE(MONTH(payment_date)) (
    -> PARTITION p0 VALUES LESS THAN(02),
    -> PARTITION p1 VALUES LESS THAN(03),
    -> PARTITION p2 VALUES LESS THAN(04),
    -> PARTITION p3 VALUES LESS THAN(05),
    -> PARTITION p4 VALUES LESS THAN(06),
    -> PARTITION p5 VALUES LESS THAN(07),
    -> PARTITION p6 VALUES LESS THAN(08),
    -> PARTITION p7 VALUES LESS THAN(09),
    -> PARTITION p8 VALUES LESS THAN(10),
    -> PARTITION p9 VALUES LESS THAN(11),
    -> PARTITION P10 VALUES LESS THAN(MAXVALUE)
    -> );
Query OK, 0 rows affected (5.73 sec)

Let's load some data into the table so we can work with some partitioning features. Perhaps soon I'll update this post with a stored procedure that populates the table's partitions in a WHILE loop.

insert into test.t1 set id=null, value='1.00', payment_date=date_sub(now(), interval 1 month);
insert into test.t1 set id=null, value='2.00', payment_date=date_sub(now(), interval 2 month);
insert into test.t1 set id=null, value='3.00', payment_date=date_sub(now(), interval 3 month);
insert into test.t1 set id=null, value='4.00', payment_date=date_sub(now(), interval 4 month);
insert into test.t1 set id=null, value='5.00', payment_date=date_sub(now(), interval 5 month);
insert into test.t1 set id=null, value='6.00', payment_date=date_sub(now(), interval 6 month);
insert into test.t1 set id=null, value='7.00', payment_date=date_sub(now(), interval 7 month);
insert into test.t1 set id=null, value='8.00', payment_date=date_sub(now(), interval 8 month);
insert into test.t1 set id=null, value='9.00', payment_date=date_sub(now(), interval 9 month);
insert into test.t1 set id=null, value='10.00', payment_date=date_sub(now(), interval 10 month);
insert into test.t1 set id=null, value='11.00', payment_date=date_sub(now(), interval 11 month);
insert into test.t1 set id=null, value='12.00', payment_date=date_sub(now(), interval 12 month);

And now, the new resource supported in MySQL 5.6 – how to retrieve data from partitioned table selecting rows just from a specific partition:

mysql> select id, concat('R$ ',value) as amount, payment_date from test.t1 partition(p5);
+----+---------+---------------------+
| id | amount  | payment_date        |
+----+---------+---------------------+
| 1  | R$ 1.00 | 2014-06-22 21:19:26 |
+----+---------+---------------------+
1 row in set (0.00 sec)
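The month-to-partition mapping used by the RANGE(MONTH(payment_date)) scheme above can be mimicked in plain code to reason about where a row will land. A sketch (bounds and partition names copied from the CREATE TABLE; the helper itself is mine):

```python
import datetime

# VALUES LESS THAN bounds from the CREATE TABLE above, in order
BOUNDS = [(2, "p0"), (3, "p1"), (4, "p2"), (5, "p3"), (6, "p4"), (7, "p5"),
          (8, "p6"), (9, "p7"), (10, "p8"), (11, "p9")]

def partition_for(payment_date):
    """Return the partition a row lands in under RANGE(MONTH(payment_date))."""
    month = payment_date.month
    for bound, name in BOUNDS:
        if month < bound:
            return name
    return "P10"  # VALUES LESS THAN (MAXVALUE)

print(partition_for(datetime.date(2014, 6, 22)))  # p5, matching the SELECT above
```

This also makes the gotcha visible: months 11 and 12 both fall into the MAXVALUE partition P10, which is why that partition holds two rows in the INFORMATION_SCHEMA output below.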

You can check existing partition names, expressions and current rows querying INFORMATION_SCHEMA.PARTITIONS table:

mysql> select table_schema, table_name, partition_name, table_rows
    -> from information_schema.partitions where table_name='t1' and table_schema='test'\g
+--------------+------------+----------------+------------+
| table_schema | table_name | partition_name | table_rows |
+--------------+------------+----------------+------------+
| test         | t1         | p0             |          1 |
| test         | t1         | p1             |          1 |
| test         | t1         | p2             |          1 |
| test         | t1         | p3             |          1 |
| test         | t1         | p4             |          1 |
| test         | t1         | p5             |          1 |
| test         | t1         | p6             |          1 |
| test         | t1         | p7             |          1 |
| test         | t1         | p8             |          1 |
| test         | t1         | p9             |          1 |
| test         | t1         | P10            |          2 |
+--------------+------------+----------------+------------+
11 rows in set (0.00 sec)

I created a table partitioned with the RANGE() partition function nesting the MONTH() built-in function, which makes it impossible for the MySQL partition engine to use the resource called partition pruning. That is true; but since we state explicitly which partition to read from, pruning doesn't matter in this case. For better results, I'll insert some new rows into the table and then SELECT rows using EXPLAIN in two scenarios: (1) reading rows from a specific partition, forcing the pruning behavior, and (2) reading rows from all partitions. You can verify the difference in the output of EXPLAIN PARTITIONS...

See this below:

#
#: selecting rows from all partitions
#
mysql> explain partitions select id, concat('R$ ', value) as amount, payment_date from test.t1 order by value\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t1
   partitions: p0,p1,p2,p3,p4,p5,p6,p7,p8,p9,P10
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 12
        Extra: Using filesort
1 row in set (0.00 sec)

#
#: selecting rows from a specific partition - partition pruning
#
mysql> explain partitions select id, concat('R$ ', value) as amount, payment_date from test.t1 partition(p6)\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: t1
   partitions: p6
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 2
        Extra: NULL
1 row in set (0.00 sec)

This feature can be used by the database administrator to improve database design and query performance. It is much better to select rows from a single partition than to scan the whole index or table searching for rows. And not just for SELECT: when you think about a good strategy for data purging, or for moving a slice of a table's data to a history database, you can simply exchange partitions between partitioned tables, or even drop/truncate them.
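A purge strategy along those lines can be scripted by generating DROP PARTITION statements for month partitions outside a retention window. A hypothetical sketch reusing the p0..p9 naming above (partition pN covers month N+1; the MAXVALUE partition P10 is deliberately left alone):

```python
def purge_statements(table, keep_months, current_month):
    """Generate ALTER TABLE ... DROP PARTITION statements for month
    partitions outside the retention window (partition pN covers month N+1)."""
    # months to keep, counting back from the current month, wrapping the year
    keep = {(current_month - i - 1) % 12 + 1 for i in range(keep_months)}
    # p0..p9 cover months 1..10; the MAXVALUE partition P10 is not touched
    return [f"ALTER TABLE {table} DROP PARTITION p{m - 1};"
            for m in range(1, 11) if m not in keep]

# keep the last 3 months counting back from June (months 4, 5 and 6)
stmts = purge_statements("test.t1", keep_months=3, current_month=6)
print(stmts[0])  # ALTER TABLE test.t1 DROP PARTITION p0;
```

Of course, DROP PARTITION discards the rows for good; when the slice must survive in a history database, EXCHANGE PARTITION is the safer move.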

mysql> alter table test.t1 drop partition p10;
Query OK, 0 rows affected (1.71 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> select table_name, partition_name from information_schema.partitions where table_name='t1' and table_schema='test';
+------------+----------------+
| table_name | partition_name |
+------------+----------------+
| t1         | p0             |
| t1         | p1             |
| t1         | p2             |
| t1         | p3             |
| t1         | p4             |
| t1         | p5             |
| t1         | p6             |
| t1         | p7             |
| t1         | p8             |
| t1         | p9             |
+------------+----------------+
10 rows in set (0.01 sec)

As you can see, we can force partition pruning by naming partitions in a SELECT through the PARTITION() clause. This feature is planned for MySQL 5.6, and you can download that version from MySQL Labs -> http://labs.mysql.com/
