08 Jan 2018
ProxySQL Firewalling
Written by Marco Tusa

Not long ago we had an internal discussion about security and how to enforce a stricter set of rules to prevent malicious acts, and block other undesired queries.

ProxySQL comes up as a possible tool that could help us achieve what we were looking for. Last year I wrote about how to use ProxySQL to stop a single query.

 

That approach may be good for a few queries and as a temporary solution. But what can we do when we really want to use ProxySQL as an SQL-based firewall? And, more importantly, how do we do it right?

 

First of all, let us define what “right” can be in this context.

By “right” I mean an approach that allows us to have rules that match as specifically as possible, while impacting the production system as little as possible.

To make this clearer, let us assume I have 3 schemas:

Shakila

World

Windmills

 

I want my firewall to block/allow SQL access independently for each schema, user, eventually by source, and so on.

There are a few cases where this is not realistic, like in SaaS setups where each schema represents a customer. In that case, the application issues exactly the same kind of SQL, just pointing to different schemas depending on the customer.

Using ProxySQL

Anyhow… ProxySQL allows you to manage query firewalling in a very simple and efficient way using the query rules.

In the mysql_query_rules table we can define a lot of important things, and one of these is our SQL firewall.

 

How?

Let us take a look at the mysql_query_rules table:

 

CREATE TABLE mysql_query_rules (
    rule_id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 0,
    username VARCHAR,
    schemaname VARCHAR,
    flagIN INT NOT NULL DEFAULT 0,
    client_addr VARCHAR,
    proxy_addr VARCHAR,
    proxy_port INT,
    digest VARCHAR,
    match_digest VARCHAR,
    match_pattern VARCHAR,
    negate_match_pattern INT CHECK (negate_match_pattern IN (0,1)) NOT NULL DEFAULT 0,
    re_modifiers VARCHAR DEFAULT 'CASELESS',
    flagOUT INT,
    replace_pattern VARCHAR,
    destination_hostgroup INT DEFAULT NULL,
    cache_ttl INT CHECK(cache_ttl > 0),
    reconnect INT CHECK (reconnect IN (0,1)) DEFAULT NULL,
    timeout INT UNSIGNED,
    retries INT CHECK (retries>=0 AND retries <=1000),
    delay INT UNSIGNED,
    next_query_flagIN INT UNSIGNED,
    mirror_flagOUT INT UNSIGNED,
    mirror_hostgroup INT UNSIGNED,
    error_msg VARCHAR,
    OK_msg VARCHAR,
    sticky_conn INT CHECK (sticky_conn IN (0,1)),
    multiplex INT CHECK (multiplex IN (0,1,2)),
    log INT CHECK (log IN (0,1)),
    apply INT CHECK(apply IN (0,1)) NOT NULL DEFAULT 0,
    comment VARCHAR)
 

 

We can define rules around almost everything: connection source, port, destination IP/Port, user, schema, SQL text, and any combination of them.
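For instance, a blocking rule that combines several of those dimensions could look like the sketch below (the client address and rule_id 900 are made up for illustration, they are not part of the setup used later in this article):

INSERT INTO mysql_query_rules (rule_id,active,username,schemaname,client_addr,match_digest,error_msg,apply)
VALUES (900,1,'pxc_test','windmills','10.0.0.1','.','Schema windmills is not reachable from this client',1);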

 

Given that we may have quite a large set of queries to manage, I prefer to logically create “areas” around which to add the rules that manage SQL access.

For instance, I may decide to allow a specific set of SELECTs to my schema windmills but nothing more.

Given that, I allocate the set of rule ids from 100 to 1100 to my schema, and add my rules in 3 groups.

  1. The exceptions that will bypass the firewall
  2. The blocking rule(s) [the firewall itself]
  3. The managing rules (post-processing like sharding and so on)

There is a simple thing to keep in mind when you design rules for firewalling:
do you need post-processing of the query or not?
If you DON'T need post-processing, the rule can simply apply and exit the Query Processor. That is probably the most common scenario, and the read/write split can be defined in the exception rules by assigning the desired HostGroup to each rule.

If instead you need post-processing, the rule MUST have apply=0 and the FLAGOUT must be defined. That will allow you to take additional actions once the query is past the firewall.

An example is sharding, where you need to process the sharding key/comment or whatever else.

 

I will use the simple firewall scenario, given that this is the topic of the current article.

The rules

Let us start with the easy one, set 2, the blocking rule:

insert into mysql_query_rules (rule_id,username,schemaname,match_digest,error_msg,active,apply) 
values(1000,'pxc_test','windmills','.',
'You cannot pass.....I am a servant of the Secret Fire, wielder of the flame of Anor,. You cannot pass.',1, 1);

 

In this query rule, I have defined the following:

  • User connecting
  • Schema name
  • Any query
  • Message to report
  • Rule_id 

That rule will block ANY query that tries to access the schema windmills from the application user pxc_test.
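To see the rule in action from a client, connect through ProxySQL as pxc_test and issue anything against windmills; the statement should be rejected with the configured message. A sketch of what to expect (the table is one of those used later in this article):

-- connected to ProxySQL on port 6033 as pxc_test, default schema windmills
SELECT COUNT(*) FROM wmillAUTOINC;
-- fails with: "You cannot pass.....I am a servant of the Secret Fire, wielder of the flame of Anor,. You cannot pass."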

 

Now in set 1, I will add all the rules I want to let pass. I will report only one of them here, but all of them can be found on GitHub.

insert into mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,
schemaname,active,retries,apply,flagout,match_digest)
values(101,6033,'pxc_test',52,'windmills',1,3,1,1000,
'SELECT wmillAUTOINC\.id,wmillAUTOINC\.millid,wmillAUTOINC\.location
FROM wmillAUTOINC WHERE wmillAUTOINC\.millid=.* and wmillAUTOINC\.active=.*'
);

 

That is quite simple and straightforward, but there is an important element that you must note:
in this rule, apply must always have the value 1, to allow the query to bypass the firewall without further delay.

(Side note: if you need post-processing, flagout needs to have a value (like flagout=1000) and apply must be 0. That will allow the query to jump to set 3, the managing rules.)

 

That is it: ProxySQL will go on to the managing rules as soon as it finds a matching rule that allows the application to access my database/schema, or it will exit if apply=1.

A graph can help to understand better:

(diagram: query rule processing flow)

 

 

Rule set 3 contains the standard query rules that manage what to do with the incoming connection, like sharding or redirecting SELECT … FOR UPDATE and so on:

insert into mysql_query_rules (rule_id,proxy_port,schemaname,username,
destination_hostgroup,active,retries,match_digest,apply,flagin)
values(1040,6033,'windmills','pxc_test',50,1,3,'^SELECT.*FOR UPDATE',1,1000);

 

Please note the presence of the flagin which matches the flagout above.
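One practical reminder: as with any change done through the admin interface, none of the rules above takes effect until they are loaded to runtime (and saved to disk if you want them to survive a restart):

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;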

 

Setting rules, sometimes thousands of them, can be very confusing. It is very important to plan correctly which queries should be allowed through and which should not. Do not rush; take your time and carefully identify the queries you need to manage.

 

Once more, ProxySQL can help us. Querying the table stats_mysql_query_digest will tell us exactly which queries were sent to ProxySQL, i.e.:

(admin@127.0.0.1) [main]>select hostgroup,schemaname,digest,digest_text,count_star 
from stats_mysql_query_digest where schemaname='windmills' order by count_star desc;

 

 

The above query shows us all the queries hitting the windmills schema. From there we can decide which queries we want to pass and which not.

>select hostgroup,schemaname,digest,digest_text,count_star 
from stats_mysql_query_digest where schemaname='windmills' order by count_star desc limit 1\G
*************************** 1. row ***************************
  hostgroup: 50
 schemaname: windmills
     digest: 0x18CA8FF2C9C53276
digest_text: SHOW GLOBAL STATUS
 count_star: 141
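Since the digests are already collected there, one possible shortcut is to bulk-generate the allow rules directly from stats_mysql_query_digest inside the admin interface. This is only a sketch, not the method used in this article: the generated rules still need manual review before being loaded, rule_id is left to the AUTOINCREMENT, and hostgroup 52 is simply the reader HG used here:

INSERT INTO mysql_query_rules (active,username,schemaname,digest,destination_hostgroup,apply)
SELECT DISTINCT 1, username, schemaname, digest, 52, 1
FROM stats_mysql_query_digest
WHERE schemaname='windmills';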

 

Once we have our rule set done (check GitHub for an example), we are ready to check how our firewall works.


To start, I suggest you keep all the exceptions (in set 1) with active=0, just to test the firewall.

 

For instance, my application will generate the following exception:

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException:
You cannot pass.....I am a servant of the Secret Fire, wielder of the flame of Anor,. You cannot pass.
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
	at com.mysql.jdbc.Util.getInstance(Util.java:386)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2809)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2758)
	at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1612)
	at net.tc.stresstool.statistics.providers.MySQLStatus.getStatus(MySQLStatus.java:48)
	at net.tc.stresstool.statistics.providers.MySQLSuper.collectStatistics(MySQLSuper.java:92)
	at net.tc.stresstool.statistics.StatCollector.collectStatistics(StatCollector.java:258)
	at net.tc.stresstool.StressTool.<init>(StressTool.java:198)
	at net.tc.stresstool.StressTool.main(StressTool.java:282)

 

Activating the rules will instead allow your application to work as usual.
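In practice, “activating the rules” is just a matter of flipping the active flag on the exception set and reloading. A sketch, assuming the exceptions live in the 100-999 rule_id range used in this article:

UPDATE mysql_query_rules SET active=1 WHERE rule_id BETWEEN 100 AND 999;
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;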

 

What will the impact be?

First, let us define the baseline: running the application with no blocking rules, only the r/w split (set 3).

 

Queries/sec global


Using two application servers:

Server A: Total Execution time = 213

Server B: Total Execution time = 209

 

 

Queries/sec per server


As we can see queries are almost equally distributed.

 

QueryProcessor time taken/Query processed total


 

All queries are processed by QueryProcessor in ~148ms AVG (total)

QueryProcessor efficiency per query.


 

The single-query cost, measured in nanoseconds in the graph, averages about 10 µs.

 

Use match_digest

Having defined the baseline, we can go ahead and activate all the rules using match_digest.
Run the same tests again and…:

 

Queries/sec global


 

Using two application servers:

Server A: Total Execution time = 207

Server B: Total Execution time = 204

 

First of all, we can notice that the execution time did not increase. This is mainly because we have spare CPU cycles on the ProxySQL box.

 

Queries/sec per server


 

 

Here we have a bit of imbalance. We will investigate that separately, but all in all the time/effort still looks acceptable.

 

QueryProcessor time taken/Query processed total


Here is the first thing to notice. Comparing this to the baseline we defined, we can see that using the rules with match_digest significantly increased the execution time, to 458 ms.

 

QueryProcessor efficiency per query.


 

Notice that, even though we are still talking about microseconds, the cost of processing a query is now three times that of the baseline. Not too much, but if you add a stone to another stone and another stone and another stone… you end up building a huge wall.

 

So, what to do? Up to now we have seen that managing the firewall with ProxySQL is easy and can be set at a very detailed level, but the cost may not be what we expect it to be.

 

What can be done? Use DIGEST instead.

The secret is not to use match_digest, which implies interpreting the query string, but to use the DIGEST of the query, which is calculated in advance and remains constant for that query.
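In practice, the workflow is: read the digest of the query you want to allow from stats_mysql_query_digest, then use it in the digest column of the rule instead of match_digest. A minimal sketch (the digest value and rule_id below are placeholders, not the real ones for this workload):

SELECT digest, digest_text FROM stats_mysql_query_digest
WHERE schemaname='windmills' AND digest_text LIKE 'SELECT wmillAUTOINC%';

INSERT INTO mysql_query_rules (rule_id,username,schemaname,digest,destination_hostgroup,active,retries,apply)
VALUES (201,'pxc_test','windmills','0x1234567890ABCDEF',52,1,3,1);
LOAD MYSQL QUERY RULES TO RUNTIME;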

 

Let us see what happens if we run the same load using DIGEST in the mysql_query_rules table.

 

Queries/sec global


Using two application servers:

Server A: Total Execution time = 213

Server B: Total Execution time = 209

No, this is not a cut-and-paste issue. I got more or less the same execution time as without the rules, identical down to the second; the milliseconds differ though.

 

Queries/sec per server


Again, some imbalance here, but a minor thing.

 

QueryProcessor time taken/Query processed total


 

And we go down, as we should, to 61 ms for the execution of all queries. Note that we improved the efficiency of the Query Processor from 148 ms AVG to 61 ms AVG.

Why? Because our rules using the DIGEST also carry the instructions for the read/write split, so a request can exit the Query Processor with all the information required at this stage; more efficient.

 

QueryProcessor efficiency per query.


Finally, using the DIGEST, the cost per query drops to 4 µs, which is… LOW!

 

That's it! ProxySQL, using the DIGEST field from mysql_query_rules, will perform much better, given that it does not need to analyze the whole SQL string with a regular expression but just matches the DIGEST.

Conclusions

ProxySQL can be effectively used as an SQL firewall, but some best practices should be taken into consideration.

First of all, try to use specific rules, and be specific about what should be filtered/allowed. Filter by schema, user, IP/port, or a combination of them.

Always try to avoid match_digest and use digest instead. That allows ProxySQL to bypass the call to the regular-expression library, which is by far more efficient.

Use stats_mysql_query_digest to identify the correct DIGEST.

 

Regarding this, it would be nice to have a GUI that allowed us to manage these rules; that would make the usage of ProxySQL much easier, and the maintenance/creation of rule chains friendlier.

Last Updated on Tuesday, 09 January 2018 11:34
 
21 Nov 2017
ProxySQL PXC Single Writer Mode and auto failover, re-bootstrap
Written by Marco Tusa

Overview

ProxySQL has been adopted as the solution for HA in place of HAProxy in the Percona PXC package.
The new solution has a lot of advantages and provides an incredible flexibility we did not have before. But when talking about HA and PXC there was still a gap.

As already discussed in my previous article “ProxySQL and Percona XtraDB Cluster (Galera) Integration” (https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/),
ProxySQL works great when using a multi-master approach, but having it manage a single writer node was not possible, unless using creative (a nice way to say wrong) solutions like the one I covered in “ProxySQL and PXC using Replication HostGroup” in the same article.

In the following months, I received a few emails and requests from colleagues and external users reporting that, at times, that specific solution was not working, or was not covering this or that case. No matter that I had marked the related feature, active_failover, as experimental: apparently it was being deployed as a solution in conjunction with Replication HGs.

What was wrong with that? Well, first of all, it was only a test, an experimental feature, and as such I was not going to maintain it or fix bugs. But secondly, and more importantly, it was based on a wrong concept. ProxySQL sits on top of MySQL as a reverse proxy. The specific feature broke that main idea and was actually acting on the PXC nodes, changing their status (READ_ONLY). That should not happen: any action or operation should involve only the ProxySQL node(s).

 

 

 

 

Why this blog

This blog describes what I have implemented in the script as a (non-experimental) solution for the case in which we need to use PXC in Single Writer Mode (SWM). I will illustrate the new feature in PXC and the additional concept of a Failover Host Group (FHG).

After a few months, I realized I had to change the script and either remove the experimental feature or redesign it. But something was missing in PXC to allow me (us) to correctly enable SWM. At the beginning I was actually looking to have it fully integrated in ProxySQL, and I had several discussions about this with Percona colleagues and with René. But in the end the new PXC internal features were not enough to guarantee that ProxySQL could identify and manage the most dangerous cases. Let us see what was implemented in PXC and what was still missing, such that I had to put my hands on the script code again.

Good… but not good enough

When we need PXC in SWM, we need ONE server in the HG receiving the WRITES, and as many as we like in the HG(s) receiving the READS.

What this implies is that either ProxySQL needs to perform this operation using a Replication HG (abusing the READ_ONLY flag), or we need a way to:

  • Identify which node(s) can be a WRITER candidate in place of the current one
  • Identify/assign the correct relevance and priority to each candidate
  • Have a unique view to recognize the nodes in the cluster and their segment membership
  • Identify, as much as possible, any reason(s) for which a node cannot be used as a writer

What we were discussing internally was a solution that would allow us to see the cluster status no matter which node answered us, and that would also be able to tell us which node could become the next writer. This last point should eventually be evaluated against custom settings/preferences.

PXC 5.7.19 comes with a new feature, pxc_cluster_view, a table that resides in performance_schema and can be easily accessed simply by doing:

 

(pxc_test@192.168.1.11) [(none)]>select * FROM performance_schema.pxc_cluster_view ORDER BY SEGMENT,LOCAL_INDEX;
+-----------+--------------------------------------+--------+-------------+---------+
| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
+-----------+--------------------------------------+--------+-------------+---------+
| node1     | 05b5554f-be34-11e7-aa92-5e038abbae35 | SYNCED |           0 |       1 |
| node2     | 63870cc3-af5d-11e7-a0db-463be8426737 | SYNCED |           1 |       1 |
| node3     | 666ad5a0-af5d-11e7-9b39-2726e5de8eb1 | SYNCED |           2 |       1 |
| node6     | 7540bdca-b267-11e7-bae9-464c3d263470 | SYNCED |           3 |       2 |
| node5     | ab551483-b267-11e7-a1a1-9be1826f877f | SYNCED |           4 |       2 |
| node4     | e24ebd01-af5d-11e7-86f0-42c8ceb6886c | SYNCED |           5 |       2 |
+-----------+--------------------------------------+--------+-------------+---------+

 

 

As you can see, the table reports the list of ALL nodes, ordered by SEGMENT and LOCAL_INDEX. Why did I use that sort? Because this order gives me the priority with which PXC sees the nodes. As we know, the segment is assigned and is part of the customization/design we have to take care of when deploying the cluster. LOCAL_INDEX, instead, is a value calculated automatically from the UUID assigned internally by the cluster. As such, while it is a good indicator internally, it is totally useless for us if we want to “customize” the priority.
But this is not all. As already covered in my previously mentioned article, PXC/Galera nodes can be in a state that prevents proper access to them, without ProxySQL realizing it… or PXC itself.

One very simple example is the following:

(root@localhost) [(none)]>set global wsrep_reject_queries=ALL; 
2017-11-06T19:21:14.533407Z 738828 [Note] WSREP: Rejecting client queries due to manual setting

 

(pxc_test@192.168.1.231) [(none)]>select now();select * FROM performance_schema.pxc_cluster_view ORDER BY SEGMENT,LOCAL_INDEX;
| 2017-11-06 14:22:19 |
| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
| node1     | 05b5554f-be34-11e7-aa92-5e038abbae35 | SYNCED |           0 |       1 | ← still looks good
| node2     | 63870cc3-af5d-11e7-a0db-463be8426737 | SYNCED |           1 |       1 |
| node3     | 666ad5a0-af5d-11e7-9b39-2726e5de8eb1 | SYNCED |           2 |       1 |
| node6     | 7540bdca-b267-11e7-bae9-464c3d263470 | SYNCED |           3 |       2 |
| node5     | ab551483-b267-11e7-a1a1-9be1826f877f | SYNCED |           4 |       2 |
| node4     | e24ebd01-af5d-11e7-86f0-42c8ceb6886c | SYNCED |           5 |       2 |


Needless to say, by default the vanilla ProxySQL version will not catch that either, so Bingo! Or, to be more correct, BOOOM!

Your cluster is doomed, because by mistake or on purpose something untracked has stopped your production write node.
So what else/more do we need, to have not only a healthy PXC cluster but also the chance to perform a clean failover when using SWM?

  • Well, of course, a script covering the different node states, which my script was already doing (a few of the obvious checks are sketched right after this list).
  • The chance to indicate a list of preferred nodes, with priorities. After that, we can decide to fall back to the priority as it comes from pxc_cluster_view if, for any reason, our choice does not work. Or even go to a different segment as a last resort, still respecting the pxc_cluster_view.
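As an illustration of the first point, these are a few of the obvious things to verify on each node. This is only a sketch of the kind of checks involved; the exact list used by the script may differ:

SHOW GLOBAL VARIABLES LIKE 'wsrep_reject_queries';
SHOW GLOBAL VARIABLES LIKE 'wsrep_desync';
SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_local_state_comment','wsrep_cluster_status','wsrep_local_recv_queue');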

To address the first and second points, I first looked at what had been done in ProxySQL for Group Replication. Group Replication has a dedicated table with a lot of additional information:

  • writer_hostgroup
  • backup_writer_hostgroup
  • reader_hostgroup
  • offline_hostgroup
  • active
  • max_writers
  • writer_is_also_reader
  • Max_transactions_behind

This may make sense for Group Replication, even if I consider it a bit redundant, but for us and PXC that would be too much, and not needed, given that I can get what I need in a more elegant way.

Keep in mind that when René implemented the HostGroup approach he laid the foundations of an unbelievable flexibility, with elegance and consistency.
Everything in ProxySQL revolves around HGs. We do not really have servers (well, we do, as part of them), we have HGs: our rules point to HGs, queries are directed to HGs, and servers are only incidentally mentioned as part of an HG.
So why not take advantage of this? After all, I had already used the special group 9000 to manage the edge cases of maintenance and offline nodes.

The only question was: do I have all the information I need there? Let us see the content of the mysql_servers table:

  • hostgroup_id
  • hostname
  • port
  • status
  • weight
  • compression
  • max_connections
  • max_replication_lag
  • use_ssl
  • max_latency_ms
  • comment

To generate a list of candidate WRITERs for PXC, I need exactly that: nothing more, nothing less.
The new mysql_servers table will look like:

SELECT hostgroup_id,hostname,port,STATUS,weight FROM runtime_mysql_servers ORDER BY hostgroup_id,weight DESC;
+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.205 | 3306 | ONLINE | 1000000 |<-- writer
| 52           | 192.168.1.21  | 3306 | ONLINE | 1000000 |
| 52           | 192.168.1.233 | 3306 | ONLINE | 10      |
| 52           | 192.168.1.205 | 3306 | ONLINE | 1       |
| 52           | 192.168.1.22  | 3306 | ONLINE | 1       |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1       |
| 52           | 192.168.1.231 | 3306 | ONLINE | 1       |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |<-- Candidates with weight
| 8050         | 192.168.1.231 | 3306 | ONLINE | 10000   |<--
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |<--
+--------------+---------------+------+--------+---------+
 

Now that we have established we have what we need, let us test the solution.


Implement and test SWM and HA with galera_check.pl
As previously discussed, the new script can efficiently manage SWM and HA, using 3 different failover methods:

  1. Use the special HG 8000 to set preferences
  2. Use pxc_cluster_view inside the same segment
  3. As a last resort, use pxc_cluster_view in ANY segment

What we will test is:

  • Controlled failover for maintenance
  • Server cannot get queries
  • Server crash

Failover methods:

  • Use provided list of servers
  • Use pxc_cluster_view for same segment
  • Use pxc_cluster_view for all segment

We will have:

  • 6 nodes
  • 2 segments
  • 2 HG (with read/write split)

We will test it as we did for https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/ with Sysbench commands:

 

Prepare
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.11 --mysql-port=6033 --mysql-user=pxc_test --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-tablesize=50000 --num-threads=10 --report-interval=10  prepare
 
Write
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.11 --mysql-port=6033 --mysql-user=pxc_test --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-tablesize=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=off --oltp-dist-type=uniform --oltp-reconnect-mode=transaction --oltp-skip-trx=off --num-threads=10 --report-interval=10 --mysql-ignore-errors=all run
 
Read only
sysbench --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-host=192.168.1.11 --mysql-port=6033 --mysql-user=pxc_test --mysql-password=test --mysql-db=test_galera --db-driver=mysql --oltp-tables-count=50 --oltp-tablesize=50000 --max-requests=0 --max-time=9000 --oltp-point-selects=5 --oltp-read-only=on --num-threads=10 --oltp-reconnect-mode=query --oltp-skip-trx=on --report-interval=10 --mysql-ignore-errors=all run

 

Set it up:

 

UPDATE global_variables SET Variable_Value=0 WHERE Variable_name='mysql-hostgroup_manager_verbose';  

 

 

Please take a moment to note the above setting.
If you omit it, ProxySQL will verbosely report host-group changes. I suggest setting it to 0 unless you are trying to debug “something”, or your logs will soon become enormous.
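As usual, the variable only takes effect once loaded to runtime (and saved to disk to persist it):

LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;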

 

 

INSERT INTO mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent) VALUES ('pxc_test','test',1,50,'mysql',1);
LOAD MYSQL USERS TO RUNTIME;SAVE MYSQL USERS TO DISK;

 

 

 

Below are the entries to populate mysql_servers; add your servers:

 

DELETE FROM mysql_servers WHERE hostgroup_id IN (50,52);
 
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.21',50,3306,1000000,2000);
 
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.205',52,3306,1,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.21',52,3306,1000000,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.231',52,3306,1,2000);
 
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.22',52,3306,1,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.23',52,3306,1,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.233',52,3306,10,2000);

 

Here is the section with the list of WRITER candidates:

 

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.205',8050,3306,1000000,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.231',8050,3306,100000,2000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.22',8050,3306,100000,2000);
 
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

 

 

Finally, the query rules for the read/write split:

 

DELETE FROM mysql_query_rules WHERE rule_id IN (40,41,45,46,80,81);
INSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(4,6033,'pxc_test',50,1,3,'.',1);
INSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(40,6033,'pxc_test',50,1,3,'^SELECT.*FOR UPDATE',1);
INSERT INTO mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) VALUES(41,6033,'pxc_test',52,1,3,'^SELECT ',1); 
 
LOAD MYSQL QUERY RULES TO RUNTIME;SAVE MYSQL QUERY RULES TO DISK;

 

 

 

Time to test our SWM and failover.

Clone the script from GitHub into your preferred directory; I will put it under /opt for now.

git clone https://github.com/Tusamarco/proxy_sql_tools
chmod +x  /opt/proxy_sql_tools/galera_check.pl


Add the script to the scheduler:

 

DELETE FROM scheduler WHERE id=10;
INSERT  INTO scheduler (id,active,interval_ms,filename,arg1) VALUES (10,0,2000,"/opt/proxy_sql_tools/galera_check.pl","-u=admin -p=admin -h=127.0.0.1 -H=50:W,52:R -P=6032 --execution_time=0 --retry_down=0 --retry_up=0 --main_segment=1 --debug=0  --log=/var/lib/proxysql/galeraLog --active_failover=1");
LOAD SCHEDULER TO RUNTIME;SAVE SCHEDULER TO DISK;

 

 

Make it active:

UPDATE scheduler SET active=1 WHERE id=10; LOAD scheduler TO run;
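A quick sanity check that the entry is really active at runtime (the runtime_scheduler table mirrors what the scheduler is actually running):

SELECT id,active,interval_ms,filename,arg1 FROM runtime_scheduler;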

 

If you leave --execution_time=1, the script will constantly print in the log the time it takes to execute:

2017/11/10 09:03:44.439:[INFO] END EXECUTION Total Time(ms):391

I suggest you keep it on initially, so you can see what is actually happening.

 

Time to perform the first test.

Controlled failover for maintenance

When you want to do a CONTROLLED failover, the best way to do it with minimal impact is to manually add a second node with very low weight, and afterwards remove the node you need to work on. I will show you how to do this safely.

Add the new write node to the writer HG:

 

INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.205',50,3306,1,2000);
LOAD mysql servers TO run;

You will have something like this:

 

+---------+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| weight  | hostgroup | srv_host      | srv_port | STATUS | ConnUsed | ConnFree | ConnOK | ConnERR | Queries | Bytes_data_sent | Bytes_data_recv | Latency_us |
+---------+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+
| 1000000 | 50        | 192.168.1.21  | 3306     | ONLINE | 10       | 0        | 35863  | 0       | 1137903 | 75147779        | 1527773047     | 6794       |
| 1       | 50        | 192.168.1.205 | 3306     | ONLINE | 0        | 0        | 0      | 0       | 0       | 0               | 0              | 8236       |
| 1000    | 52        | 192.168.1.233 | 3306     | ONLINE | 8        | 1        | 33153  | 0       | 1747722 | 82574272        | 3919114554     | 9304       |
| 1000    | 52        | 192.168.1.231 | 3306     | ONLINE | 3        | 6        | 15206  | 0       | 1469312 | 69597188        | 3340444198     | 2989       |
| 1000    | 52        | 192.168.1.23  | 3306     | ONLINE | 6        | 3        | 36690  | 4       | 2046599 | 96797464        | 4606494587     | 10261      |
| 1000    | 52        | 192.168.1.22  | 3306     | ONLINE | 4        | 5        | 36778  | 262     | 2152217 | 101807337       | 4846638759      | 2108       |
| 10      | 52        | 192.168.1.21  | 3306     | ONLINE | 0        | 1        | 302    | 0       | 21960   | 1039596         | 49257671       | 6794       |
| 1000    | 52        | 192.168.1.205 | 3306     | ONLINE | 3        | 6        | 31355  | 0       | 2180310 | 103198644       | 4917528180      | 8236       |
+---------+-----------+---------------+----------+--------+----------+----------+--------+---------+---------+-----------------+-----------------+------------+

 

 

 

The newly added host handles very few connections because of its low weight.

At this point you just need to connect to the node you want to put in maintenance and set it to desync:

SET global wsrep_desync=1;

 

 

Once desynced, the script will put the server in OFFLINE_SOFT for you, and the server will allow the running connections to complete while new ones go to the node we have just inserted.
No errors, no service interruption at all:

 

2017/11/10 09:31:41.967:[WARN]  Move node:192.168.1.21;3306;50;3010 SQL: UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=50 AND hostname='192.168.1.21' AND port='3306'
2017/11/10 09:31:41.978:[WARN]  Move node:192.168.1.21;3306;52;3010 SQL: UPDATE mysql_servers SET status='OFFLINE_SOFT' WHERE hostgroup_id=52 AND hostname='192.168.1.21' AND port='3306'

 

 

Once maintenance is done, just reverse wsrep_desync and the script will put the node back into the pool.
If you need to stop the mysql server on that node and you do not want it automatically back, remember to remove the entry from host group 50 (the writer one), or it will automatically be placed back in production as active:

 

DELETE FROM mysql_servers WHERE hostname='192.168.1.21' AND port=3306 AND hostgroup_id=50;
LOAD mysql servers TO run;

Remember, at the end, to remove the temporary node from the writer HG.

 

Real failover

Now a real failover. We will test it using wsrep_reject_queries to trigger the issue, but the trigger could be any other problem.

In this test, given there is no other node accepting writes in the HG, we should expect some connection errors, but what we want is for the failover to happen in a very short time.
Keep in mind we have defined these as the possible failover candidates:

 

+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
+--------------+---------------+------+--------+---------+
 
While the pxc_cluster_view sees the following:
 
+-----------+--------------------------------------+--------+-------------+---------+
| HOST_NAME | UUID                                 | STATUS | LOCAL_INDEX | SEGMENT |
+-----------+--------------------------------------+--------+-------------+---------+
| node1     | 05b5554f-be34-11e7-aa92-5e038abbae35 | SYNCED |           0 |       1 |
| node2     | 63870cc3-af5d-11e7-a0db-463be8426737 | SYNCED |           2 |       1 |
| node3     | 666ad5a0-af5d-11e7-9b39-2726e5de8eb1 | SYNCED |           3 |       1 |
| node5     | 07f55a00-c57a-11e7-bd7d-8a568ca96345 | SYNCED |           1 |       2 |
| node4     | 6c185b87-c57a-11e7-b2a6-8f095c523cd3 | SYNCED |           4 |       2 |
| node6     | 7540bdca-b267-11e7-bae9-464c3d263470 | SYNCED |           5 |       2 |
+-----------+--------------------------------------+--------+-------------+---------+
 
FOR better understanding also FOR the following tests this IS the mapping, node name <> IP:
+-----------+---------------+
| HOST_NAME | IP            |
+-----------+---------------+
| node1     | 192.168.1.205 | 
| node2     | 192.168.1.21  | 
| node3     | 192.168.1.231 | 
| node5     | 192.168.1.23  | 
| node4     | 192.168.1.22  | 
| node6     | 192.168.1.233 | 
+-----------+---------------+

 

My Servers layout is:

 
+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.205 | 3306 | ONLINE | 1000000 | ← writer
| 52           | 192.168.1.205 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.22  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.231 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.233 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE | 10      |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
+--------------+---------------+------+--------+---------+

 

The first candidate is

192.168.1.22

which is node4 and sits in a DIFFERENT segment than the current writer. As such, the first election should go to the other node,

192.168.1.231

which is node3 and resides in the same segment.
Let us see what happens with active_failover=1. On the current writer, do:

set global wsrep_reject_queries=all;

The script will identify that the writer cannot accept queries and will take action:

2017/11/13 10:51:52.856:[WARN]  Move node:192.168.1.205;3306;50;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9050 WHERE hostgroup_id=50 AND hostname='192.168.1.205' AND port='3306'
2017/11/13 10:51:52.861:[WARN]  Move node:192.168.1.205;3306;52;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9052 WHERE hostgroup_id=52 AND hostname='192.168.1.205' AND port='3306'
2017/11/13 10:51:55.038:[WARN] Fail-over in action Using Method = 1
2017/11/13 10:51:55.038:[INFO] Special Backup - Group found! I am electing a node to writer following the indications
 This Node Try to become the new WRITER for HG 50 Server details: 192.168.1.231:3306:HG8050
2017/11/13 10:51:55.038:[INFO] This Node Try to become a WRITER promoting to HG 50192.168.1.231:3306:HG8050
2017/11/13 10:51:55.083:[WARN]  Move node:192.168.1.231:33061000002000 SQL:INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.231',50,3306,100000,2000);

 

It will also analyze the failover candidates and promote one node, if present. As expected, it chooses the one in the same segment.

What happens on the application side?

Writes
[ 2010s ] thds: 10 tps: 33.40 qps: 503.33 (r/w/o: 302.92/127.31/73.10) lat (ms,95%): 434.83 err/s: 0.00 reconn/s: 0.00
[ 2020s ] thds: 10 tps: 38.30 qps: 575.27 (r/w/o: 344.68/147.59/82.99) lat (ms,95%): 376.49 err/s: 0.00 reconn/s: 0.00
[ 2030s ] thds: 10 tps: 35.80 qps: 537.20 (r/w/o: 322.40/137.90/76.90) lat (ms,95%): 419.45 err/s: 0.00 reconn/s: 0.00
[ 2040s ] thds: 10 tps: 32.70 qps: 489.34 (r/w/o: 293.53/125.71/70.11) lat (ms,95%): 427.07 err/s: 0.00 reconn/s: 0.00
[ 2050s ] thds: 10 tps: 30.60 qps: 456.43 (r/w/o: 273.62/117.61/65.20) lat (ms,95%): 493.24 err/s: 0.00 reconn/s: 0.00
[ 2060s ] thds: 10 tps: 19.20 qps: 298.36 (r/w/o: 181.18/74.99/42.19) lat (ms,95%): 657.93 err/s: 0.00 reconn/s: 30.50 <-- here 
[ 2070s ] thds: 10 tps: 23.40 qps: 345.80 (r/w/o: 206.30/89.10/50.40) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 0.00
[ 2080s ] thds: 10 tps: 27.20 qps: 414.93 (r/w/o: 249.72/106.71/58.50) lat (ms,95%): 612.21 err/s: 0.00 reconn/s: 0.00
 
Reads
[ 2000s ] thds: 30 tps: 256.50 qps: 2309.91 (r/w/o: 2309.91/0.00/0.00) lat (ms,95%): 189.93 err/s: 0.00 reconn/s: 0.00
[ 2010s ] thds: 30 tps: 250.27 qps: 2254.26 (r/w/o: 2254.26/0.00/0.00) lat (ms,95%): 200.47 err/s: 0.00 reconn/s: 0.00
[ 2020s ] thds: 30 tps: 256.29 qps: 2305.30 (r/w/o: 2305.30/0.00/0.00) lat (ms,95%): 186.54 err/s: 0.00 reconn/s: 0.00
[ 2030s ] thds: 30 tps: 241.65 qps: 2176.28 (r/w/o: 2176.28/0.00/0.00) lat (ms,95%): 193.38 err/s: 0.00 reconn/s: 0.00
[ 2040s ] thds: 30 tps: 217.88 qps: 1956.96 (r/w/o: 1956.96/0.00/0.00) lat (ms,95%): 215.44 err/s: 0.00 reconn/s: 0.00
[ 2050s ] thds: 30 tps: 277.70 qps: 2504.09 (r/w/o: 2504.09/0.00/0.00) lat (ms,95%): 196.89 err/s: 0.00 reconn/s: 0.00
[ 2060s ] thds: 30 tps: 207.37 qps: 1863.52 (r/w/o: 1863.52/0.00/0.00) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 2070s ] thds: 30 tps: 199.44 qps: 1795.70 (r/w/o: 1795.70/0.00/0.00) lat (ms,95%): 244.38 err/s: 0.00 reconn/s: 0.00
 

At the moment of the server lock, the application had a hiccup on the writes, as expected, but it was very short: write operations dropped to 75-89/sec against the ~120/sec normally processed, so only ~50 writes were missed during the incident. Reads saw zero loss.

 

What if there is no other server in the same segment?

My new servers layout is:

 

+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.231 | 3306 | ONLINE | 100000  | <--- new 
| 52           | 192.168.1.22  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.231 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.233 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE | 10      |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
| 9052         | 192.168.1.205 | 3306 | ONLINE | 1000    | <--- set in special group
+--------------+---------------+------+--------+---------+

 

 

Let us put down the writer again. 

At this point the only available node is the one in the other segment.
As expected, the script identifies the failed node and takes action:

 

2017/11/13 12:15:18.535:[WARN]  Move node:192.168.1.231;3306;50;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9050 WHERE hostgroup_id=50 AND hostname='192.168.1.231' AND port='3306'
2017/11/13 12:15:18.543:[WARN]  Move node:192.168.1.231;3306;52;3001 SQL: UPDATE mysql_servers SET hostgroup_id=9052 WHERE hostgroup_id=52 AND hostname='192.168.1.231' AND port='3306'
2017/11/13 12:15:20.646:[WARN] Fail-over in action Using Method = 1
2017/11/13 12:15:20.646:[INFO] Special Backup - Group found! I am electing a node to writer following the indications
 This Node Try to become the new WRITER for HG 50 Server details: 192.168.1.22:3306:HG8050
2017/11/13 12:15:20.646:[INFO] This Node Try to become a WRITER promoting to HG 50192.168.1.22:3306:HG8050
2017/11/13 12:15:20.660:[WARN]  Move node:192.168.1.22:33061000002000 SQL:INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.22',50,3306,100000,2000);

 

This is because, when using active_failover=1, I assume YOU KNOW what you are doing in using a failover group. As such, while I still use the segments for internal priority, I am not filtering out the node(s) in a different segment, as happens when using active_failover=2.

Same minimal impact on TPS:

 

Writes
[ 7040s ] thds: 10 tps: 26.39 qps: 395.02 (r/w/o: 236.85/101.48/56.69) lat (ms,95%): 569.67 err/s: 0.00 reconn/s: 0.00
[ 7050s ] thds: 10 tps: 26.81 qps: 405.20 (r/w/o: 243.16/104.73/57.31) lat (ms,95%): 623.33 err/s: 0.00 reconn/s: 0.00
[ 7060s ] thds: 10 tps: 18.80 qps: 276.65 (r/w/o: 167.27/68.29/41.09) lat (ms,95%): 634.66 err/s: 0.00 reconn/s: 23.70
[ 7070s ] thds: 10 tps: 18.90 qps: 292.87 (r/w/o: 177.78/73.99/41.10) lat (ms,95%): 5813.24 err/s: 0.00 reconn/s: 2.80
[ 7080s ] thds: 10 tps: 24.00 qps: 356.84 (r/w/o: 213.42/91.71/51.71) lat (ms,95%): 623.33 err/s: 0.00 reconn/s: 0.00
[ 7090s ] thds: 10 tps: 20.60 qps: 312.27 (r/w/o: 187.58/79.69/45.00) lat (ms,95%): 759.88 err/s: 0.00 reconn/s: 0.00
[ 7100s ] thds: 10 tps: 19.80 qps: 299.33 (r/w/o: 179.02/78.41/41.90) lat (ms,95%): 787.74 err/s: 0.00 reconn/s: 0.00
[ 7110s ] thds: 10 tps: 26.70 qps: 398.33 (r/w/o: 239.12/101.51/57.70) lat (ms,95%): 669.89 err/s: 0.00 reconn/s: 0.00
 
Reads
[ 7040s ] thds: 30 tps: 207.90 qps: 1870.43 (r/w/o: 1870.43/0.00/0.00) lat (ms,95%): 227.40 err/s: 0.00 reconn/s: 0.00
[ 7050s ] thds: 30 tps: 198.88 qps: 1790.44 (r/w/o: 1790.44/0.00/0.00) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 7060s ] thds: 30 tps: 165.52 qps: 1488.05 (r/w/o: 1488.05/0.00/0.00) lat (ms,95%): 282.25 err/s: 0.00 reconn/s: 0.00
[ 7070s ] thds: 30 tps: 186.87 qps: 1681.46 (r/w/o: 1681.46/0.00/0.00) lat (ms,95%): 267.41 err/s: 0.00 reconn/s: 0.00
[ 7080s ] thds: 30 tps: 178.24 qps: 1607.05 (r/w/o: 1607.05/0.00/0.00) lat (ms,95%): 267.41 err/s: 0.00 reconn/s: 0.00
[ 7090s ] thds: 30 tps: 167.67 qps: 1506.13 (r/w/o: 1506.13/0.00/0.00) lat (ms,95%): 282.25 err/s: 0.00 reconn/s: 0.00
[ 7100s ] thds: 30 tps: 170.50 qps: 1535.70 (r/w/o: 1535.70/0.00/0.00) lat (ms,95%): 287.38 err/s: 0.00 reconn/s: 0.00
[ 7110s ] thds: 30 tps: 175.52 qps: 1580.38 (r/w/o: 1580.38/0.00/0.00) lat (ms,95%): 267.41 err/s: 0.00 reconn/s: 0.00
 
Servers layout
+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.22  | 3306 | ONLINE | 100000  |<-- new writer
| 52           | 192.168.1.22  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.233 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE | 10      |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
| 9052         | 192.168.1.205 | 3306 | ONLINE | 1000    |<-- node in special hg
| 9052         | 192.168.1.231 | 3306 | ONLINE | 1000    |<-- “
+--------------+---------------+------+--------+---------+
 

 

At this point, as a safety feature, even if you put the failed nodes back, the script WILL NOT fail back. But I am working on a feature request to implement the option to eventually perform the fail-back.
As of today, if you run:

Set global wsrep_reject_queries=none 

on both servers, you will have:

 

+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 52           | 192.168.1.205 | 3306 | ONLINE | 1000    |<-- back but READ
| 52           | 192.168.1.22  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.231 | 3306 | ONLINE | 1000    |<-- back but READ 
| 52           | 192.168.1.233 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE | 10      |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
+--------------+---------------+------+--------+---------+

 

 

To fail back, you need to trigger it manually.
Connect to the current writer and set wsrep_reject_queries = ALL.
The script will then move back as writer the node with the highest weight in the 8000 HG:
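For reference, the manual trigger is just the usual statement on the node side (192.168.1.22 being the current writer at this point in the walkthrough):

-- on the current writer
SET GLOBAL wsrep_reject_queries=ALL;
-- once the original node is back as writer, re-open this node for traffic
SET GLOBAL wsrep_reject_queries=NONE;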

 

+--------------+---------------+------+--------+---------+
| hostgroup_id | hostname      | port | STATUS | weight  |
+--------------+---------------+------+--------+---------+
| 50           | 192.168.1.205 | 3306 | ONLINE | 1000000 |<-- back as writer
| 52           | 192.168.1.205 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.231 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.233 | 3306 | ONLINE | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE | 10      |
| 8050         | 192.168.1.205 | 3306 | ONLINE | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE | 100000  |
| 9052         | 192.168.1.22  | 3306 | ONLINE | 1000    |<-- set as special group
+--------------+---------------+------+--------+---------+

 

Server crash

In case of a server crash the behaviour will be exactly the same, with the difference that the new node is elected immediately, with no intermediate step for the HG shift.
Killing mysqld with kill -9 (SIGKILL) will result in the following:

 

writes
[ 240s ] thds: 10 tps: 42.80 qps: 642.78 (r/w/o: 385.99/166.00/90.80) lat (ms,95%): 331.91 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 10 tps: 40.80 qps: 603.37 (r/w/o: 361.74/155.92/85.71) lat (ms,95%): 363.18 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 10 tps: 39.60 qps: 600.79 (r/w/o: 361.20/156.20/83.40) lat (ms,95%): 376.49 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 10 tps: 35.00 qps: 541.33 (r/w/o: 324.82/141.91/74.60) lat (ms,95%): 356.70 err/s: 0.00 reconn/s: 1.00<--
[ 280s ] thds: 10 tps: 14.80 qps: 216.47 (r/w/o: 130.78/54.89/30.80) lat (ms,95%): 7615.89 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 10 tps: 23.10 qps: 347.64 (r/w/o: 208.42/91.21/48.00) lat (ms,95%): 682.06 err/s: 0.00 reconn/s: 0.00
 
reads
[ 230s ] thds: 30 tps: 253.27 qps: 2281.51 (r/w/o: 2281.51/0.00/0.00) lat (ms,95%): 193.38 err/s: 0.00 reconn/s: 0.00
[ 240s ] thds: 30 tps: 229.80 qps: 2066.20 (r/w/o: 2066.20/0.00/0.00) lat (ms,95%): 215.44 err/s: 0.00 reconn/s: 0.00
[ 250s ] thds: 30 tps: 217.51 qps: 1957.59 (r/w/o: 1957.59/0.00/0.00) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 260s ] thds: 30 tps: 207.89 qps: 1872.62 (r/w/o: 1872.62/0.00/0.00) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 270s ] thds: 30 tps: 200.18 qps: 1802.03 (r/w/o: 1802.03/0.00/0.00) lat (ms,95%): 253.35 err/s: 0.00 reconn/s: 0.00<--
[ 280s ] thds: 30 tps: 199.33 qps: 1791.80 (r/w/o: 1791.80/0.00/0.00) lat (ms,95%): 240.02 err/s: 0.00 reconn/s: 0.00
[ 290s ] thds: 30 tps: 187.49 qps: 1689.78 (r/w/o: 1689.78/0.00/0.00) lat (ms,95%): 257.95 err/s: 0.00 reconn/s: 0.00
 
Script Log:
2017/11/16 10:02:10.923:[ERROR] Cannot connect TO DBI:mysql:host=192.168.1.205;port=3306 AS monitor
2017/11/16 10:02:11.179:[WARN] Fail-over IN action USING Method = 1
2017/11/16 10:02:11.179:[INFO] Special Backup - GROUP found! I am electing a node TO writer following the indications
 This Node Try TO become the new WRITER FOR HG 50 Server details: 192.168.1.231:3306:HG8050
2017/11/16 10:02:11.179:[INFO] This Node Try TO become a WRITER promoting TO HG 50 192.168.1.231:3306:HG 8050
2017/11/16 10:02:11.234:[WARN]  Move node:192.168.1.231:33061000002000 SQL:INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections) VALUES ('192.168.1.231',50,3306,100000,2000);
2017/11/16 10:02:13.190:[ERROR] Cannot connect TO DBI:mysql:host=192.168.1.205;port=3306 AS monitor
 
Server layout:
(admin@127.0.0.1) [main]>select hostgroup_id,hostname,port,STATUS,weight FROM runtime_mysql_servers ORDER BY hostgroup_id,weight DESC;
+--------------+---------------+------+---------+---------+
| hostgroup_id | hostname      | port | STATUS  | weight  |
+--------------+---------------+------+---------+---------+
| 50           | 192.168.1.231 | 3306 | ONLINE  | 100000  |<-- new writer
| 52           | 192.168.1.205 | 3306 | SHUNNED | 1000    |<--shunned server
| 52           | 192.168.1.22  | 3306 | ONLINE  | 1000    |
| 52           | 192.168.1.23  | 3306 | ONLINE  | 1000    |
| 52           | 192.168.1.231 | 3306 | ONLINE  | 1000    |
| 52           | 192.168.1.233 | 3306 | ONLINE  | 1000    |
| 52           | 192.168.1.21  | 3306 | ONLINE  | 10      |
| 8050         | 192.168.1.205 | 3306 | SHUNNED | 1000000 |
| 8050         | 192.168.1.22  | 3306 | ONLINE  | 100000  |
| 8050         | 192.168.1.231 | 3306 | ONLINE  | 100000  |
+--------------+---------------+------+---------+---------+

 

Using active_failover=2|3

The behaviour is going to be exactly the same, no matter which failover method is chosen.
The most flexible and controlled one is to use the 8000 HG with an explicit list of servers prioritized by weight.

The use of pxc_cluster_view gives you a pseudo-automatic way to set the same priority.
But choosing that method will NOT guarantee that the node you want takes the traffic, given that the choice is based on the internal Galera decision.
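Switching between methods is just a matter of changing the --active_failover value in the scheduler entry; a sketch, assuming the scheduler entry with id=10 created earlier:

UPDATE scheduler SET arg1=REPLACE(arg1,'--active_failover=1','--active_failover=2') WHERE id=10;
LOAD SCHEDULER TO RUNTIME; SAVE SCHEDULER TO DISK;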

Conclusions

While the galera_check script was initially conceived to work with multiple writers, the inclusion of pxc_cluster_view and the addition of the special Host Group 8000 provided the base I needed to develop a sustainable feature to support a single writer and failover.

I have to remind you that the preferred solution is, and remains, to use multiple writers with different weights, given that that solution provides internal failover in ProxySQL.
The use of active_failover can significantly reduce the amount of service interruption caused by a possible single-writer node crash, but it is still an external script and is not as effective as the internal weight-based balancing.

By design, the script at the moment does NOT support fail-back, but I already have a feature request for this.
Once implemented, this feature will allow a node that has been removed from the active pool to recover its role once it is healthy again.
Another very important aspect is the way ProxySQL manages the configuration across nodes using the cluster option.

This will be described in the next article.

More blogs (shorter) will come on how to use the script and ProxySQL Cluster.

Reference

https://github.com/Tusamarco/proxy_sql_tools
https://www.percona.com/blog/2016/09/15/proxysql-percona-cluster-galera-integration/

Last Updated on Tuesday, 21 November 2017 12:48
 