MariaDB MaxScale like a Pro: Setting up MaxScale 2.3
I created this series of blog posts after working with MariaDB MaxScale for many customers. All the points mentioned here reflect my own views; I’ll add links to the online docs so we also have an official reference. I intend to share my experiences working with MaxScale; we need more practical documentation so we can improve MaxScale usage and transfer knowledge.
First of all, take a look at the MaxScale 2.3 release notes.
Something you need to know before starting with the hands-on part here – all instances are running Debian 9.
root@prod-mariadb01:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 9.9 (stretch)
Release:        9.9
Codename:       stretch
MariaDB MaxScale in a nutshell…
MariaDB MaxScale is an intelligent database proxy that understands the SQL language and ships with a number of bundled modules known as routers (or services), monitors, filters, etc. After setting up the MaxScale packages, you have access to all bundled modules; you only need to add a basic configuration file and start the service. Since version 2.1, you don’t need to write a complete configuration file (/etc/maxscale.cnf) to start the service. Once you have a configuration file with the global [maxscale] section and at least one service defined, you can start the MaxScale service.
What are we going to be doing for setting up the MariaDB MaxScale?
- Add the MariaDB Official Repository;
- Create the needed users on the database servers (you will see soon that I’m considering you already a Master/Slave running environment);
- Create the .secrets file so we can encrypt the passwords for the users on the maxscale.cnf;
- Create a basic configuration file for MaxScale and start the process;
- Run dynamic commands so we can create a monitor (MariaDBMon), the servers, a listener, and link created servers with the monitor and the service.
Moving on…
For instance, you can start with something like the configuration below, which gives you the global configuration plus the ReadWriteSplit Router configured as the service:
#: This is the basic configuration file we can get in place to start maxscale.
#: Notice that we need yet to come back soon to this file so we can add the service
#: encrypted user password (attention to security matters, no clear text passwords, please)
[maxscale]
threads = auto
log_augmentation = 1
ms_timestamp = 1
[rwsplit-service]
type = service
router = readwritesplit
user = maxusr
password = <add your encrypted maxusr password here>
version_string = 5.5.50-MariaDB #:should be used with all 10.1 servers and older
As we need SOPs, or Standard Operational Procedures, for everything we perform, the documentation I have for setting up MaxScale considers always having three users:
- A service user: no matter how many services/routers you’re running on a MaxScale instance, you need at least one user set for the service. The defined user for the service is maxusr (yes, without the “e”, I didn’t forget it). Once you define that user, you also need to create it on the backends so the MaxScale router/service can connect to them and forward queries. In this specific scenario, as we’re speaking about the ReadWriteSplit Router, writes will be sent to the master and reads will be sent to the slaves. You may want to check how the ReadWriteSplit Router routing decisions work so you can better design your applications;
- A monitor user: monitors are modules that watch the backends, and depending on what you’re running, you will use one monitor or another. If you’re running a replication cluster, that is, regular GTID master/slave replication, you want to use MariaDBMon, which gives you automatic operations such as failover/rejoin and the possibility to perform a manual switchover;
- A replication user: as we’re considering a replication cluster, or even a simple master/slave scenario, we need a user so MaxScale can configure replication on the database servers when needed. That happens when we configure MaxScale to execute a failover in case the master crashes, when we run a manual switchover, or when a rejoin needs to be executed because the old master comes back to the cluster as a new slave/replica. If you don’t create a replication user when configuring MariaDBMon, be aware that the replication user will be the one you defined to run the monitor itself; I personally don’t like that (don’t be lazy, 😉 ).
Let’s assume you have a simple Master/Slave already running, like below:
#: master/slave topology

MariaDB MaxScale Servers
--------------------------------------------------------
1. prod_maxscale01 (10.136.87.62/24 - Mode: Active)

MariaDB Servers Backends Cluster
--------------------------------------------------------
2. prod_mariadb01 (10.136.88.50/24 - master)
3. \__ prod_mariadb02 (10.136.69.104/24 - slave/replica)
4. \__ prod_mariadb03 (10.136.79.28/24 - slave/replica)
As a best practice, always configure the @@global.report_host on all database servers with their names:
prod-mariadb01 [(none)]> show slave hosts;
+-----------+----------------+------+-----------+
| Server_id | Host           | Port | Master_id |
+-----------+----------------+------+-----------+
|         3 | prod_mariadb03 | 3306 |         1 |
|         2 | prod_mariadb02 | 3306 |         1 |
+-----------+----------------+------+-----------+
2 rows in set (0.000 sec)
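Since report_host is not a dynamic variable, it has to be set in the server configuration file and requires a restart to take effect. A minimal sketch, assuming the usual Debian include path (adjust the file path and server_id for each server):

```ini
# /etc/mysql/mariadb.conf.d/50-server.cnf (path is an assumption; adjust per distribution)
[mysqld]
server_id   = 2
report_host = prod_mariadb02
```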
Assuming the above MariaDB backends cluster already has replication up and running (most of you have an environment like this one), you can now think about how to add a MaxScale server between your applications and your database servers. Most of the time, I’m going to refer to the database servers as backends, per the usual terminology we adopt after adding a load balancer to a database topology.
Doing a quick recap on where we are: we now need to create the users on the master, so they replicate to the slaves and we have the same data everywhere. Also, it’s good to have @@global.gtid_strict_mode set on all servers so the binary log files stay identical across servers (MaxScale also likes that).
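Enabling it is straightforward: gtid_strict_mode is a dynamic global variable, so it can be turned on at runtime and then persisted in the server configuration, for example:

```sql
-- Run on every server in the cluster:
SET GLOBAL gtid_strict_mode = ON;

-- And persist it under the [mysqld] section of the server configuration:
--   gtid_strict_mode = ON
```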
Below we are creating the users as mentioned before, considering the backends we’re working with:
#: maxscale service user
CREATE USER 'maxusr'@'10.136.%' IDENTIFIED BY '123';
GRANT SELECT ON mysql.user TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.db TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.tables_priv TO 'maxusr'@'10.136.%';
GRANT SHOW DATABASES ON *.* TO 'maxusr'@'10.136.%';
GRANT SELECT ON mysql.roles_mapping TO maxusr@'10.136.%';

#: maxscale monitor user
CREATE USER 'maxmon'@'10.136.%' IDENTIFIED BY '321';
GRANT RELOAD, SUPER, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'maxmon'@'10.136.%';
GRANT CREATE, SELECT, UPDATE, INSERT, DELETE ON maxscale_schema.* TO 'maxmon'@'10.136.%';

#: replication users - make sure the below user can connect
#: from all backends to all backends
CREATE USER mariadb@'10.136.%' IDENTIFIED BY '123';
GRANT RELOAD, REPLICATION SLAVE ON *.* TO mariadb@'10.136.%';
One additional detail here: if you’re running MariaDB Server 10.3.4 or later, or you upgraded from an older version to such a release, the DELETE HISTORY privilege will also be added to the list of GRANTS for maxmon, since that user has the SUPER privilege and should also be able to delete data from System-Versioned Tables.
After creating the above users, we need to go to prod_maxscale01 – 10.136.87.62 – to set up the MariaDB Official Repository and install the MaxScale packages:
#: setting up the repository
root@prod-maxscale01:~# curl -sS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
[warning] Found existing file at /etc/apt/sources.list.d/mariadb.list. Moving to /etc/apt/sources.list.d/mariadb.list.old_1.
[info] Repository file successfully written to /etc/apt/sources.list.d/mariadb.list
[info] Adding trusted package signing keys...
Executing: /tmp/apt-key-gpghome.sDa0MNg3Md/gpg.1.sh --keyserver hkp://keys.gnupg.net:80 --recv-keys 0x8167EE24 0xE3C94F49 0xcbcb082a1bb943db 0xf1656f24c74cd1d8 0x135659e928c12247
gpg: key 135659E928C12247: "MariaDB Maxscale <maxscale@googlegroups.com>" not changed
gpg: key F1656F24C74CD1D8: 4 signatures not checked due to missing keys
gpg: key F1656F24C74CD1D8: "MariaDB Signing Key <signing-key@mariadb.org>" not changed
gpg: key CBCB082A1BB943DB: 32 signatures not checked due to missing keys
gpg: key CBCB082A1BB943DB: "MariaDB Package Signing Key <package-signing-key@mariadb.org>" not changed
gpg: key CE1A3DD5E3C94F49: 3 signatures not checked due to missing keys
gpg: key CE1A3DD5E3C94F49: "MariaDB Enterprise Signing Key <signing-key@mariadb.com>" not changed
gpg: key 70E4618A8167EE24: "MariaDBManager" not changed
gpg: Total number processed: 5
gpg:              unchanged: 5
Hit:1 http://security.debian.org stretch/updates InRelease
Ign:2 http://mirrors.digitalocean.com/debian stretch InRelease
Hit:3 https://repos.insights.digitalocean.com/apt/do-agent main InRelease
Get:4 http://mirrors.digitalocean.com/debian stretch-updates InRelease [91.0 kB]
Hit:5 http://downloads.mariadb.com/MariaDB/mariadb-10.4/repo/debian stretch InRelease
Hit:6 http://mirrors.digitalocean.com/debian stretch Release
Ign:7 http://downloads.mariadb.com/MaxScale/2.3/debian stretch InRelease
Hit:8 http://downloads.mariadb.com/Tools/debian stretch InRelease
Hit:10 http://downloads.mariadb.com/MaxScale/2.3/debian stretch Release
Hit:9 https://packagecloud.io/akopytov/sysbench/debian stretch InRelease
Fetched 91.0 kB in 0s (106 kB/s)
Reading package lists... Done
[info] Successfully added trusted package signing keys.

#: setting up packages
root@prod-maxscale01:~# apt install maxscale maxscale-experimental mariadb-client -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  maxscale maxscale-experimental
0 upgraded, 2 newly installed, 0 to remove and 5 not upgraded.
Need to get 167 kB/31.1 MB of archives.
After this operation, 143 MB of additional disk space will be used.
Get:1 http://downloads.mariadb.com/MaxScale/2.3/debian stretch/main amd64 maxscale-experimental amd64 2.3.11 [167 kB]
Fetched 167 kB in 0s (337 kB/s)
Selecting previously unselected package maxscale.
(Reading database ... 35953 files and directories currently installed.)
Preparing to unpack .../maxscale_2.3.11_amd64.deb ...
Unpacking maxscale (2.3.11) ...
Selecting previously unselected package maxscale-experimental.
Preparing to unpack .../maxscale-experimental_2.3.11_amd64.deb ...
Unpacking maxscale-experimental (2.3.11) ...
Setting up maxscale (2.3.11) ...
Setting up maxscale-experimental (2.3.11) ...
Why am I also installing the mariadb-client package? We need to test access from the MaxScale host to the backends to make sure that MaxScale, configured with the users we created, will be able to reach them. Catching permission or access-denied errors only when we execute queries is really bad, as we then need to review everything we did during the setup, and that can take some time. We definitely don’t want that. Test access and move on.
Now, let’s create the .secrets file and get encrypted versions of the users’ passwords:
#: create the .secrets file
root@prod-maxscale01:~# maxkeys
Generating .secrets file in /var/lib/maxscale.

#: generate the encrypted password for maxusr - this is the service user
#: you are going to need the below-encrypted password for the next question
root@prod-maxscale01:~# maxpasswd 123 #: maxusr
A0FE98035CFA5EB978337B739E949878

#: generate the encrypted password for maxmon - this is the monitor user
#: you are going to need the below-encrypted password on next labs
root@prod-maxscale01:~# maxpasswd 321 #: maxmon
AFB909850E7181E9906159CE45176FAD

#: generate the encrypted password for the mariadb replication user
root@prod-maxscale01:~# maxpasswd 123 #: mariadb
A0FE98035CFA5EB978337B739E949878

#: adjust permissions for the .secrets file
root@prod-maxscale01:~# chown maxscale:maxscale /var/lib/maxscale/.secrets
With the encrypted passwords in place, we can create a basic configuration file. Below is your /etc/maxscale.cnf:
[maxscale]
threads = auto
log_augmentation = 1
ms_timestamp = 1
admin_host = 0.0.0.0
admin_port = 8989

[rwsplit-service]
type = service
router = readwritesplit
user = maxusr
password = A0FE98035CFA5EB978337B739E949878
Check that the maxscale.service is enabled so it starts at OS boot, start it, and check the log file afterward:
#: starting up the maxscale.service
root@prod-maxscale01:~# systemctl --now enable maxscale.service
root@prod-maxscale01:~# tail -n30 /var/log/maxscale/maxscale.log

MariaDB MaxScale  /var/log/maxscale/maxscale.log  Mon Aug  5 12:25:54 2019
----------------------------------------------------------------------------
2019-08-05 12:25:54   notice : (mxb_log_set_syslog_enabled): syslog logging is enabled.
2019-08-05 12:25:54   notice : (mxb_log_set_maxlog_enabled): maxlog logging is enabled.
2019-08-05 12:25:54.078   notice : (mxb_log_set_highprecision_enabled): highprecision logging is enabled.
2019-08-05 12:25:54.078   notice : (config_load_global): Using up to 976.56KiB of memory for query classifier cache
2019-08-05 12:25:54.079   notice : (change_cwd): Working directory: /var/log/maxscale
2019-08-05 12:25:54.079   notice : (init_sqlite3): The collection of SQLite memory allocation statistics turned off.
2019-08-05 12:25:54.079   notice : (init_sqlite3): Threading mode of SQLite set to Multi-thread.
2019-08-05 12:25:54.080   notice : (main): MariaDB MaxScale 2.3.11 started (Commit: 36355922281a6820de63b76fb76c9203861e3988)
2019-08-05 12:25:54.080   notice : (main): MaxScale is running in process 13166
2019-08-05 12:25:54.080   notice : (main): Configuration file: /etc/maxscale.cnf
2019-08-05 12:25:54.080   notice : (main): Log directory: /var/log/maxscale
2019-08-05 12:25:54.081   notice : (main): Data directory: /var/lib/maxscale
2019-08-05 12:25:54.081   notice : (main): Module directory: /usr/lib/x86_64-linux-gnu/maxscale
2019-08-05 12:25:54.081   notice : (main): Service cache: /var/cache/maxscale
2019-08-05 12:25:54.082   notice : (load_module): Loaded module qc_sqlite: V1.0.0 from /usr/lib/x86_64-linux-gnu/maxscale/libqc_sqlite.so
2019-08-05 12:25:54.082   notice : (qc_setup): Query classification results are cached and reused. Memory used per thread: 976.56KiB
2019-08-05 12:25:54.083   notice : (init): The systemd watchdog is Enabled. Internal timeout = 30s
2019-08-05 12:25:54.083   notice : (config_load_single_file): Loading /etc/maxscale.cnf.
2019-08-05 12:25:54.084   notice : (is_directory): /etc/maxscale.cnf.d does not exist, not reading.
2019-08-05 12:25:54.084   notice : (mxs_get_module_object): Initializing statement-based read/write split router module.
2019-08-05 12:25:54.085   notice : (load_module): Loaded module readwritesplit: V1.1.0 from /usr/lib/x86_64-linux-gnu/maxscale/libreadwritesplit.so
2019-08-05 12:25:54.085   notice : (qc_sqlite_process_init): Statements that cannot be parsed completely are logged.
2019-08-05 12:25:54.086   notice : (service_launch_all): Starting a total of 1 services...
2019-08-05 12:25:54.086   warning: (serviceStartAllPorts): Service 'rwsplit-service' has no listeners defined.
2019-08-05 12:25:54.086   notice : (service_launch_all): Service 'rwsplit-service' started (1/1)
2019-08-05 12:25:54.086   notice : (main): Started REST API on [0.0.0.0]:8989
2019-08-05 12:25:54.086   notice : (main): MaxScale started with 1 worker threads, each with a stack size of 8388608 bytes.
2019-08-05 12:25:54.090   notice : (hkthread): Housekeeper thread started.
Now you have MaxScale up and running! It’s time to test the maxusr and maxmon connectivity with the backends:
#: service user access test
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.88.50 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.69.104 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxusr -p123 -h 10.136.79.28 ping
mysqld is alive

#: monitor user access test
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.88.50 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.69.104 ping
mysqld is alive
root@prod-maxscale01:~# mysqladmin -u maxmon -p321 -h 10.136.79.28 ping
mysqld is alive
A final test to make sure everything is really set is to check that you can access all database servers from all database servers. I don’t want to go over all nine combinations here, but it’s good that you go through them and make sure the replication user can connect from every backend to every backend, as replication will be set up by MaxScale and you don’t want an access-denied error reported by the IO_THREAD of a new slave configured by MaxScale.
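A quick way to enumerate the full source/destination matrix is a nested loop; the sketch below only prints the commands to run (the host list matches this environment, and swapping `echo` for an `ssh "$src"` invocation to actually execute the pings is left as an illustration):

```shell
#!/bin/sh
# Enumerate every source -> destination pair for the replication user test.
# Replace `echo` with `ssh "$src"` (or run the command locally on each backend)
# to actually execute the checks; as written, this is a dry run.
hosts="10.136.88.50 10.136.69.104 10.136.79.28"
for src in $hosts; do
  for dst in $hosts; do
    echo "on $src: mysqladmin -u mariadb -p123 -h $dst ping"
  done
done
```

With three backends this prints nine commands, one per source/destination combination.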
At this point, we have MaxScale running with a basic configuration file and basic settings for the ReadWriteSplit Router, which is the only service currently running on MaxScale. You can notice that we have settings in the global [maxscale] section that make it possible to access MaxScale remotely using MaxCtrl. Here I’m not considering MaxAdmin, as it’s deprecated in MaxScale 2.3 and will be removed in MaxScale 2.4; the latter is currently in beta.
Currently, you can use MaxCtrl to retrieve basic information like below:
#: maxscale global configurations
root@prod-maxscale01:~# maxctrl show maxscale
┌──────────────┬──────────────────────────────────────────────────────────┐
│ Version      │ 2.3.11                                                   │
├──────────────┼──────────────────────────────────────────────────────────┤
│ Commit       │ 36355922281a6820de63b76fb76c9203861e3988                 │
├──────────────┼──────────────────────────────────────────────────────────┤
│ Started At   │ Mon, 05 Aug 2019 12:25:54 GMT                            │
├──────────────┼──────────────────────────────────────────────────────────┤
│ Activated At │ Mon, 05 Aug 2019 12:25:54 GMT                            │
├──────────────┼──────────────────────────────────────────────────────────┤
│ Uptime       │ 17863                                                    │
├──────────────┼──────────────────────────────────────────────────────────┤
│ Parameters   │ {                                                        │
│              │     "libdir": "/usr/lib/x86_64-linux-gnu/maxscale",      │
│              │     "datadir": "/var/lib/maxscale",                      │
│              │     "process_datadir": "/var/lib/maxscale/data13166",    │
│              │     [...snip...]                                         │
│              │     "admin_auth": true,                                  │
│              │     "admin_enabled": true,                               │
│              │     "admin_log_auth_failures": true,                     │
│              │     "admin_host": "0.0.0.0",                             │
│              │     "admin_port": 8989,                                  │
│              │     "admin_ssl_key": "",                                 │
│              │     "admin_ssl_cert": "",                                │
│              │     "admin_ssl_ca_cert": "",                             │
│              │     "passive": false,                                    │
│              │     [...snip...]                                         │
│              │     "load_persisted_configs": true                       │
│              │ }                                                        │
└──────────────┴──────────────────────────────────────────────────────────┘
There are lots of commands to retrieve information from MaxScale using MaxCtrl, which talks to the REST API, now listening on any interface of the MaxScale host (as per the `admin_host` variable) on port 8989 (as defined by `admin_port`). As we have tested communication with MaxScale, we can now use MaxCtrl to create the needed objects (a monitor and a listener), add the servers, and link them to the monitor and the service. Since MaxScale 2.1, we can do this with Dynamic Commands; these commands create the objects and persist them in files under the --persistdir. Additionally, a journal file is kept in the MaxScale --datadir so MaxScale can keep track of the current status of the backends in case maxscale.service is restarted.
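You can also query the REST API directly, without MaxCtrl. The sketch below assumes the 2.3 default REST API credentials, admin/mariadb (change them in production), and uses the `admin_host`/`admin_port` values from our configuration:

```shell
# List server objects via the MaxScale REST API as JSON.
# admin/mariadb are the default credentials - an assumption worth changing
# in any real deployment. Other useful endpoints: /v1/services, /v1/monitors.
curl -s -u admin:mariadb http://10.136.87.62:8989/v1/servers
```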
Let’s create the cluster; follow the commands below:
#: ReadWriteSplit setup using Dynamic Commands
#: Created by Wagner Bianchi <bianchi@mariadb.com>

#: task: creating the monitor
maxctrl create monitor replication-monitor mariadbmon \
  --monitor-user=maxmon --monitor-password=AFB909850E7181E9906159CE45176FAD \
  replication_user=mariadb replication_password=A0FE98035CFA5EB978337B739E949878

#: task: configuring the monitor for the replication cluster
maxctrl alter monitor replication-monitor monitor_interval 1000
maxctrl alter monitor replication-monitor failcount 3
maxctrl alter monitor replication-monitor auto_failover true
maxctrl alter monitor replication-monitor auto_rejoin true
maxctrl alter monitor replication-monitor enforce_read_only_slaves true

#: task: create a listener
maxctrl create listener rwsplit-service replication-rwsplit-listener 3306

#: task: create servers
maxctrl create server prod_mariadb01 10.136.88.50 3306
maxctrl create server prod_mariadb02 10.136.69.104 3306
maxctrl create server prod_mariadb03 10.136.79.28 3306

#: task: link servers with the service
maxctrl link service rwsplit-service prod_mariadb01
maxctrl link service rwsplit-service prod_mariadb02
maxctrl link service rwsplit-service prod_mariadb03

#: task: link servers with the monitor
maxctrl link monitor replication-monitor prod_mariadb01
maxctrl link monitor replication-monitor prod_mariadb02
maxctrl link monitor replication-monitor prod_mariadb03
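After running the dynamic commands, you can confirm the objects were persisted to disk. The path below is the default --persistdir on this kind of install; it's an assumption worth verifying on your own system (for example with `maxscale --help`):

```shell
# Each dynamically created object (servers, monitor, listener) gets its own
# .cnf file in the persistdir, which MaxScale reloads on restart.
ls -l /var/lib/maxscale/maxscale.cnf.d/
```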
If you executed the above commands while tailing the maxscale.log, you would have seen many interesting things. At this point, it’s ready to go:
root@prod-maxscale01:~# maxctrl list servers
┌────────────────┬───────────────┬──────┬─────────────┬─────────────────┬──────────────┐
│ Server         │ Address       │ Port │ Connections │ State           │ GTID         │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb03 │ 10.136.79.28  │ 3306 │ 0           │ Slave, Running  │ 0-1-3        │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb02 │ 10.136.69.104 │ 3306 │ 0           │ Slave, Running  │ 0-1-3        │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────────┤
│ prod_mariadb01 │ 10.136.88.50  │ 3306 │ 0           │ Master, Running │ 0-1-3        │
└────────────────┴───────────────┴──────┴─────────────┴─────────────────┴──────────────┘
To conclude this blog post, we can do a final test: run sysbench against the MaxScale server:
root@prod-maxscale01:~# mysql -u bianchi -p123 -h 10.136.87.62 -e "create database maxscaledb" -vvv
--------------
create database maxscaledb
--------------

Query OK, 1 row affected (0.002 sec)

Bye

root@prod-maxscale01:~# sysbench --test=/usr/share/sysbench/oltp_read_write.lua --table_size=10000 \
  --mysql-db=maxscaledb --tables=20 --mysql-user=bianchi --mysql-password=123 --mysql-port=3306 \
  --mysql-host=10.136.87.62 --db-driver=mysql --threads=32 --events=0 --time=60 \
  --rand-type=uniform --report-interval=1 prepare
sysbench 1.0.17 (using bundled LuaJIT 2.1.0-beta2)

Initializing worker threads...

Creating table 'sbtest15'...
Creating table 'sbtest17'...
Creating table 'sbtest16'...
Creating table 'sbtest14'...
[...snip...]
Creating a secondary index on 'sbtest19'...
Creating a secondary index on 'sbtest20'...
Creating a secondary index on 'sbtest16'...
Creating a secondary index on 'sbtest11'...
Creating a secondary index on 'sbtest14'...

root@prod-maxscale01:~# sysbench --test=/usr/share/sysbench/oltp_read_write.lua --table_size=10000 \
  --mysql-db=maxscaledb --tables=20 --mysql-user=bianchi --mysql-password=123 --mysql-port=3306 \
  --mysql-host=10.136.87.62 --db-driver=mysql --threads=32 --events=0 --time=60 \
  --rand-type=uniform --report-interval=1 run &
[1] 15656

root@prod-maxscale01:~# maxctrl list servers
┌────────────────┬───────────────┬──────┬─────────────┬─────────────────┬──────────┐
│ Server         │ Address       │ Port │ Connections │ State           │ GTID     │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb03 │ 10.136.79.28  │ 3306 │ 32          │ Slave, Running  │ 0-1-8144 │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb01 │ 10.136.88.50  │ 3306 │ 32          │ Master, Running │ 0-1-8144 │
├────────────────┼───────────────┼──────┼─────────────┼─────────────────┼──────────┤
│ prod_mariadb02 │ 10.136.69.104 │ 3306 │ 32          │ Slave, Running  │ 0-1-8144 │
└────────────────┴───────────────┴──────┴─────────────┴─────────────────┴──────────┘
In the next blog post, I will carry on with this same environment and test failover, switchover, and rejoin.