ProxySQL support for MySQL caching_sha2_password

Marco Tusa
MySQL
03 November 2022

Every day we use dozens, if not hundreds, of applications that connect to some kind of data repository. That connection normally travels over the network and is therefore exposed to sniffing, with all the related consequences.

Given that, it is normally better to protect your connection with encryption such as SSL or, at the very least, to make the information you pass when connecting harder to intercept.

At the same time, it is best practice not to store connection credentials in clear text, not even inside a table in your database. Doing that is the equivalent of writing your password on a sticky note on your desk. Not a good idea.

The main options are instead to transform the password into something less identifiable, such as a hash, or to store the credentials in an external centralized vault.

In MySQL, passwords are transformed so they are not stored in clear text, and several different plugins can be used to authenticate the user. From version 8, MySQL uses caching_sha2_password as the default authentication plugin. The caching_sha2_password and sha256_password plugins provide more secure password encryption than mysql_native_password, and caching_sha2_password performs better than sha256_password. Because of these superior security and performance characteristics, caching_sha2_password is, as of MySQL 8.0, both the preferred and the default authentication plugin, replacing mysql_native_password.
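If you want to check what your server is using, here is a minimal sketch (note that on MySQL 8.0.27 and later the default_authentication_plugin variable is superseded by authentication_policy):

-- Which plugin new accounts get by default
SHOW GLOBAL VARIABLES LIKE 'default_authentication_plugin';

-- Which plugin each existing account was created with
SELECT user, host, plugin FROM mysql.user ORDER BY user, host;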

In this regard, I recently got the same question again, “Can we use ProxySQL with the MySQL 8 authentication mechanism?”, and I decided it was time to write this short blog post.

The short answer is “Yes, you can”, but do not expect full caching_sha2_password support.

This is because ProxySQL does not fully support the caching_sha2_password mechanism internally, so a small “trick” must be used.

So, what should we do when using MySQL 8 and ProxySQL? 

In the text below we will see what can be done to keep using ProxySQL with MySQL and Percona Server 8.

Note that I have used the Percona proxysql_admin tool to manage the users, except in the last case.
The Percona proxysql_admin tool is a nice utility that helps you manage ProxySQL and, as far as users are concerned, it also manages and synchronizes the accounts from your Percona Server or MySQL server.

In the following examples:

  • ProxySQL is on 192.168.4.191
  • The user name/password is msandbox/msandbox

Using hashing

By default MySQL comes with caching_sha2_password, so if I create a user named msandbox I will have:

DC1-1(root@localhost) [(none)]>select user,host, authentication_string,plugin from mysql.user order by 1,2;
+----------------------------+--------------------+------------------------------------------------------------------------+-----------------------+
| user                       | host               | authentication_string                                                  | plugin                |
+----------------------------+--------------------+------------------------------------------------------------------------+-----------------------+
| msandbox                   | %                  | $A$005$Z[z@l'O%[Q5t^ EKJDgxjWXJjDpDEUv91oL7Hoh/0NydTeCzpV.aI06C9.      | caching_sha2_password |      <---- this user     
+----------------------------+--------------------+------------------------------------------------------------------------+-----------------------+ 

Then I use percona-scheduler-admin to sync the users:

./percona-scheduler-admin --config-file=config.toml --syncusers 
Syncing user accounts from PXC(192.168.4.205:3306) to ProxySQL
Removing existing user from ProxySQL: msandbox
Adding user to ProxySQL: msandbox

Synced PXC users to the ProxySQL database!

mysql> select * from mysql_users ;
+------------+------------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+-----------------------------+
| username   | password                                                               | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment                     |
+------------+------------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+-----------------------------+
| msandbox   | $A$005$Z[z@l'O%[Q5t^ EKJDgxjWXJjDpDEUv91oL7Hoh/0NydTeCzpV.aI06C9       | 1      | 0       | 100               | NULL           | 0             | 1                      | 0            | 1       | 1        | 10000           |            |                             |
+------------+------------------------------------------------------------------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+-----------------------------+

And set the query rules:

insert into mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(1048,6033,'msandbox',100,1,3,'^SELECT.*FOR UPDATE',1);
insert into mysql_query_rules (rule_id,proxy_port,username,destination_hostgroup,active,retries,match_digest,apply) values(1050,6033,'msandbox',101,1,3,'^SELECT.*$',1);

load mysql query rules to run;save mysql query rules to disk;
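To double check what was defined, a quick verification query against the ProxySQL admin interface (a minimal sketch, selecting only a few of the table's columns):

-- Verify the rules just defined (ProxySQL admin interface)
SELECT rule_id, active, username, match_digest, destination_hostgroup, apply
FROM mysql_query_rules
ORDER BY rule_id;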

Now I try to connect passing by ProxySQL:

# mysql -h 192.168.4.191 -P6033  -umsandbox -pmsandbox
ERROR 1045 (28000): ProxySQL Error: Access denied for user 'msandbox'@'192.168.4.191' (using password: YES)

My account fails to connect because authentication fails.

To fix this I need to drop the user and recreate it with a different authentication plugin in my MySQL server:

drop user msandbox@'%';
create user 'msandbox'@'%' identified with mysql_native_password  BY 'msandbox';
grant select on *.* to 'msandbox'@'%';

select user,host, authentication_string,plugin from mysql.user order by 1,2;
+----------+--------------------+-------------------------------------------+-----------------------+
| user     | host               | authentication_string                     | plugin                |
+----------+--------------------+-------------------------------------------+-----------------------+
| msandbox | %                  | *6C387FC3893DBA1E3BA155E74754DA6682D04747 | mysql_native_password |
+----------+--------------------+-------------------------------------------+-----------------------+

At this point I can re-run:

./percona-scheduler-admin --config-file=config.toml --syncusers

If I try to connect again:

# mysql -h 192.168.4.191 -P6033  -umsandbox -pmsandbox
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6708563
Server version: 8.0.28 (ProxySQL). <---------------------------- Connecting to proxysql

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show global variables like 'version%';
+-------------------------+------------------------------------------------------------------------------------+
| Variable_name           | Value                                                                              |
+-------------------------+------------------------------------------------------------------------------------+
| version                 | 8.0.25-15.1         <--- Percona/MySQL version                                     |
| version_comment         | Percona XtraDB Cluster binary (GPL) 8.0.25, Revision 8638bb0, WSREP version 26.4.3 |
| version_compile_machine | x86_64                                                                             |
| version_compile_os      | Linux                                                                              |
| version_compile_zlib    | 1.2.11                                                                             |
| version_suffix          | .1                                                                                 |
+-------------------------+------------------------------------------------------------------------------------+
6 rows in set (0.02 sec)

This is the only way to keep the password hashed in MySQL and in ProxySQL.

Not using Hashing

What if you cannot use mysql_native_password for the password in your MySQL server?

There is still a way to connect; however, I do not recommend it because to me it is highly insecure, but for completeness I am going to illustrate it.

First of all, disable password hashing in ProxySQL:

update global_variables set Variable_Value='false' where Variable_name='admin-hash_passwords'; 
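Remember that, as with any ProxySQL variable change, this only takes effect once loaded to runtime (and is kept across restarts only if saved to disk). The standard admin statements are:

-- Apply the admin variable change and persist it
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;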

At this point, instead of syncing the users, you create the user locally in ProxySQL, like this:

insert into mysql_users (username,password,active,default_hostgroup,default_schema,transaction_persistent,comment) values ('msandbox','msandbox',1,100,'mysql',1,'generic test for security'); 
mysql> select * from runtime_mysql_users where username ='msandbox'; 
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+ 
| username | password | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment                   | 
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+ 
| msandbox | msandbox | 1      | 0       | 100               | mysql          | 0             | 1                      | 0            | 1       | 0        | 10000           |            | generic test for security | 
| msandbox | msandbox | 1      | 0       | 100               | mysql          | 0             | 1                      | 0            | 0       | 1        | 10000           |            | generic test for security | 
+----------+----------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------------------------+
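For completeness: the row appears in runtime_mysql_users, as shown above, only after the users are loaded to runtime with the usual ProxySQL admin statements:

-- Make the new user effective and persist it
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;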

As you can see, doing this prevents the password from being hashed; it is stored in clear text instead.

At this point you will be able to connect to MySQL 8 using the caching_sha2_password, but the password is visible in ProxySQL.

Let me repeat, I DO NOT recommend using it this way, because for me it is highly insecure. 

 

Conclusion

While it is still possible to configure your MySQL users so that they can connect through ProxySQL, it is obvious that we have a gap in the way ProxySQL supports security.

The hope is that this gap will be filled soon by the ProxySQL development team, although, looking at the past issues, this seems to have been pending for years now.

References

https://proxysql.com/documentation/mysql-8-0/

https://github.com/sysown/proxysql/issues/2580

https://www.percona.com/blog/upgrade-your-libraries-authentication-plugin-caching_sha2_password-cannot-be-loaded/

Zero impact on index creation with Aurora 3

Marco Tusa
MySQL
20 April 2022

In the last quarter of 2021, AWS released Aurora version 3. This new version aligns Aurora with the latest MySQL 8, porting many of the advantages MySQL 8 has over previous versions.

While this brings a lot of interesting new features to Aurora, what we are going to cover here is how DDLs behave when using the ONLINE option, with a quick comparison against what happens in standard MySQL 8 and with Group Replication.

Tests

All tests were run on an Aurora instance r6g.large with secondary availability zone.
The test was composed of:

    4 connections:

    • #1 to perform the DDL
    • #2 to insert data into the table I am altering
    • #3 to insert data into a different table
    • #4 to check the operations on the other node

In the Aurora instance, a sysbench schema with 10 tables and 5 million rows was created, just to generate a bit of traffic. The test table, also with 5 million rows, was:

CREATE TABLE `windmills_test` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `uuid` char(36) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `millid` smallint NOT NULL,
  `kwatts_s` int NOT NULL,
  `date` date NOT NULL,
  `location` varchar(50) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  `active` tinyint NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `strrecordtype` char(3) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_millid` (`millid`,`active`),
  KEY `IDX_active` (`id`,`active`),
  KEY `kuuid_x` (`uuid`),
  KEY `millid_x` (`millid`),
  KEY `active_x` (`active`),
  KEY `idx_1` (`uuid`,`active`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8mb3 COLLATE=utf8_bin ROW_FORMAT=DYNAMIC

The executed commands:

Connection 1:
    ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE;
    ALTER TABLE windmills_test drop INDEX idx_1, ALGORITHM=INPLACE;
    
Connection 2:
 while [ 1 = 1 ];do da=$(date +'%s.%3N');mysql --defaults-file=./my.cnf -D windmills_large -e "insert into windmills_test  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills4 limit 1;" -e "select count(*) from windmills_large.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done

Connection 3:
 while [ 1 = 1 ];do da=$(date +'%s.%3N');mysql --defaults-file=./my.cnf -D windmills_large -e "insert into windmills3  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills4 limit 1;" -e "select count(*) from windmills_large.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done

Connection 4:
     while [ 1 = 1 ];do echo "$(date +'%T.%3N')";mysql --defaults-file=./my.cnf -h <secondary aurora instance> -D windmills_large -e "show full processlist;"|egrep -i -e "(windmills_test|windmills_large)"|grep -i -v localhost;sleep 1;done
     

Operations:
1) start inserts from connections
2) start the check in connection 4 on the other node
3) execute: DC1-1(root@localhost) [windmills_large]>ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE;

With this, what I was looking to capture is the impact of a common operation such as creating an index. My expectation is to have no impact when performing operations that are declared “ONLINE”, such as creating an index, as well as data consistency between the nodes.

Let us see what happened…

Results

While the inserts are running on the same table, performing the alter:

mysql>  ALTER TABLE windmills_test ADD INDEX idx_1 (`uuid`,`active`), ALGORITHM=INPLACE, LOCK=NONE;
Query OK, 0 rows affected (16.51 sec)
Records: 0  Duplicates: 0  Warnings: 0

does NOT stop the operations on that table or on any other table in the Aurora instance.

We can only identify a minimal performance impact:

[root@ip-10-0-0-11 tmp]# while [ 1 = 1 ];do da=$(date +'%s.%3N');mysql --defaults-file=./my.cnf -D windmills_large -e "insert into windmills_test  select null,uuid,millid,kwatts_s,date,location,active,time,strrecordtype from windmills4 limit 1;" -e "select count(*) from windmills_large.windmills_test;" > /dev/null;db=$(date +'%s.%3N'); echo "$(echo "($db - $da)"|bc)";sleep 1;done
.347
.283
.278
.297
.291
.317
.686  ← start
<Snip>
.512  ← end
.278
.284
.279

The secondary node is not affected at all, because Aurora manages data replication at the storage level. There is no such thing as an apply from the relay log, as we have in standard MySQL asynchronous replication or with Group Replication.

The result is that in Aurora 3 we can have zero-impact index creation (or any other ONLINE/INSTANT operation), and this includes the data replicated to the other instances for High Availability.
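Just to be explicit about what I mean by an INSTANT operation, here is a minimal, purely illustrative example (the column name is mine and was not part of the test):

-- Metadata-only change: no table rebuild, no copy of the rows
ALTER TABLE windmills_test
  ADD COLUMN notes VARCHAR(10) NULL,
  ALGORITHM=INSTANT;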

If we compare this with Group replication (see blog):

                                                GR           Aurora 3
Time on hold for insert for altering table      ~0.217 sec   ~0.523 sec
Time on hold for insert for another table       ~0.211 sec   ~0.205 sec

However, keep in mind that MySQL with Group Replication will still need to apply the data on the Secondaries. This means that if your alter was taking 10 hours to build the index, the Secondary nodes will be misaligned with the Source for approximately another 10 hours. 

With Aurora 3, or with PXC, the changes are there as soon as the Source has completed the operation.

What about Percona XtraDB Cluster (PXC)? Well, with PXC we have a different scenario:

                                                PXC(NBO)     Aurora 3
Time on hold for insert for altering table      ~120 sec     ~0.523 sec
Time on hold for insert for another table       ~25 sec      ~0.205 sec

We will have a higher impact while doing the Alter operation, but the data will be on all nodes at the same time maintaining a high level of consistency in the cluster. 

Conclusion

Aurora is not for every use case, and not for every budget, but it has some very good aspects like the one we have just seen. The difference between standard MySQL and Aurora is not in the holding/locking time (aka the operation impact), but in the HA aspects. If I have my data and structure on all my Secondaries at the same time as on the Source, I will feel much more comfortable than having to wait an additional T amount of time.

This is why PXC is, in that case, the better alternative if you can afford the locking time; if not, well, Aurora 3 is your solution, just do your math properly and be conservative with the instance resources.

 

A face to face with semi-synchronous replication

Marco Tusa
MySQL
12 April 2022

Last month I performed a review of the Percona Operator for MySQL Server (https://www.percona.com/doc/kubernetes-operator-for-mysql/ps/index.html), which is still in Alpha. That operator is based on Percona Server and uses standard asynchronous replication, with the option to activate semi-synchronous replication to gain a higher level of data consistency between nodes.

The whole solution is composed as:

Additionally, Orchestrator (https://github.com/openark/orchestrator) is used to manage the topology and, if required, to enable the semi-synchronous flag on the replica nodes.
While there is not much to say about standard Asynchronous replication, I want to spend a few words on the needs and expectations around the semi-synchronous (semi-sync) solution.

A look into semi-synchronous

Difference between Async and Semi-sync.
Asynchronous:

The above diagram represents standard asynchronous replication. By design, this method expects transactions committed on the Source that are not yet present on the Replicas. The Replica is supposed to catch up when possible.

It is also important to understand that there are two steps in replication:

  • Data copy, which is normally very fast. The data is copied from the binlog of the Source to the relay log on the Replica (IO_Thread).
  • Data apply, where the data is read from the relay log on the Replica node and written into the database itself (SQL_Thread). This step is normally the bottleneck and, while there are some parameters to tune, the efficiency of applying transactions depends on many factors, including schema design (one way to observe both threads is sketched right after this list).
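A minimal sketch of how to observe the two stages on a Replica, using standard performance_schema tables (the column selection is just illustrative):

-- IO thread (data copy): is the connection to the Source alive?
SELECT channel_name, service_state, last_error_message
FROM performance_schema.replication_connection_status;

-- SQL/applier thread (data apply): is the Replica applying what it received?
SELECT channel_name, service_state
FROM performance_schema.replication_applier_status;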

Production deployments that use the Asynchronous solution are typically designed to handle the possible inconsistency, given that data on the Source is not guaranteed to be on the Replica at commit time. At the same time, the level of High Availability offered by this solution is lower than what we normally obtain with (virtually-)synchronous replication, given that we may need to wait for the Replica to catch up on the gap accumulated in the relay logs before performing the failover.

Semi-sync:

The above diagram represents the Semi-sync replication method. The introduction of semi-sync adds a checking step on the Source before it returns the acknowledgement to the client.
This step happens at the moment of the data copy, i.e. when the data is copied from the binary log on the Source to the relay log on the Replica.

This is important: there is NO mechanism to make data replication more resilient or efficient. There is only an additional step that tells the Source to wait a given amount of time for an answer from N replicas, and then either return the acknowledgement or time out and return to the client anyway.

This mechanism introduces a possibly significant delay in the service, without giving a 100% guarantee of data consistency.

In terms of service availability, under high load this method may lead the Source to stop serving requests while waiting for acknowledgements, significantly reducing the availability of the service itself.

At the same time, the only acceptable setting for rpl_semi_sync_source_wait_point is AFTER_SYNC (the default) because: “In the event of source failure, all transactions committed on the source have been replicated to the replica (saved to its relay log). An unexpected exit of the source server and failover to the replica is lossless because the replica is up to date.”

All clear? No? Let me simplify the thing. 

  • In standard replication you have two moments (I am simplifying)
    • Copy data from Source to Replica
    • Apply data in the Replica node
  • There is no certification on the data applied about its consistency with the Source
  • With asynchronous replication the Source's task is to write the data to the binlog and forget about it
  • With semi-sync the Source writes the data to the binlog and waits T seconds to receive an acknowledgement from N servers that they have received the data.

To enable semi-sync you follow these steps:  https://dev.mysql.com/doc/refman/8.0/en/replication-semisync-installation.html

In short:

  • Register the plugins
  • Enable Source rpl_semi_sync_source_enabled=1
  • Enable Replica rpl_semi_sync_replica_enabled = 1
    • If replication is already running, STOP/START REPLICA IO_THREAD (see the sketch right after this list)
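A minimal sketch of those steps using the 8.0.26+ “source”/“replica” naming (older versions use the equivalent rpl_semi_sync_master_*/rpl_semi_sync_slave_* plugins and variables):

-- On the Source
INSTALL PLUGIN rpl_semi_sync_source SONAME 'semisync_source.so';
SET GLOBAL rpl_semi_sync_source_enabled = 1;

-- On each Replica meant to participate in semi-sync
INSTALL PLUGIN rpl_semi_sync_replica SONAME 'semisync_replica.so';
SET GLOBAL rpl_semi_sync_replica_enabled = 1;
STOP REPLICA IO_THREAD;   -- only needed if replication was already running
START REPLICA IO_THREAD;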

And here starts the fun, be ready for many “wait whaaat?”. 

What is the T and N I have just mentioned above?

Well, the T is a timeout that you can set to avoid having the Source wait forever for the Replica's acknowledgement. The default is 10 seconds. What happens if the Source waits for more than the timeout?
rpl_semi_sync_source_timeout controls how long the source waits on a commit for acknowledgment from a replica before timing out and reverting to asynchronous replication.

Careful with the wording here! The manual says SOURCE, so it is not that MySQL reverts to asynchronous per transaction or per connection; it does so for the whole server.

Now, analyzing the worklog (see https://dev.mysql.com/worklog/task/?id=1720 and more in the references), the Source should revert to semi-synchronous as soon as all the involved replicas are aligned again.

However, checking the code (see  https://github.com/mysql/mysql-server/blob/beb865a960b9a8a16cf999c323e46c5b0c67f21f/plugin/semisync/semisync_source.cc#L844 and following), we can see that we do not have a 100% guarantee that the Source will be able to switch back. 

Also in the code:
But, it is not that easy to detect that the replica has caught up. This is caused by the fact that MySQL's replication protocol is asynchronous, meaning that if the source does not use the semi-sync protocol, the replica would not send anything to the source.

In all the tests I ran, the Source was not able to switch back. In short, the Source moved out of semi-sync and that was forever, no going back. Keep that in mind as we go ahead.

What is the N I mentioned above? It represents the number of Replicas that must provide the acknowledgement back. 

If you have a cluster of 10 nodes, you may need only 2 of them involved in semi-sync; there is no need to include them all. But if you have a cluster of 3 nodes where 1 is the Source, relying on a single Replica is not really safe. What I mean is that if you choose semi-synchronous replication to ensure the data gets replicated, having it enabled on one single node is not enough: if that node crashes, or anything else happens to it, you are doomed, so you need at least 2 nodes with semi-sync.
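The number of replicas the Source waits for is controlled by a system variable; a minimal example for the three-node scenario just described (8.0.26+ naming):

-- Ask the Source to wait for the acknowledgement of 2 replicas (default is 1)
SET GLOBAL rpl_semi_sync_source_wait_for_replica_count = 2;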

Anyhow, the point is that if one of the Replicas takes more than T to reply, the whole mechanism stops working, probably forever.

As we have seen above, to enable semi-sync on Source we manipulate the value of the GLOBAL variable rpl_semi_sync_source_enabled.

However, if I check the value of rpl_semi_sync_source_enabled after the Source has shifted to plain Asynchronous replication because of a timeout:

select @@rpl_semi_sync_source_enabled;
+--------------------------------+
| @@rpl_semi_sync_source_enabled |
+--------------------------------+
|                              1 |
+--------------------------------+

As you can see, the global variable reports a value of 1, meaning that semi-sync appears active even when it is not.

The documentation reports that, to monitor semi-sync activity, we should check Rpl_semi_sync_source_status. Which means that you can have Rpl_semi_sync_source_status = 0 and rpl_semi_sync_source_enabled = 1 at the same time.
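A quick way to look at the two values side by side (a minimal sketch):

-- What we asked for (the system variable)
SHOW GLOBAL VARIABLES LIKE 'rpl_semi_sync_source_enabled';

-- What the server is actually doing right now (the status variable)
SHOW GLOBAL STATUS LIKE 'Rpl_semi_sync_source_status';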

Is this a bug? Well according to documentation:
When the source switches between asynchronous or semisynchronous replication due to commit-blocking timeout or a replica catching up, it sets the value of the Rpl_semi_sync_source_status status variable appropriately. Automatic fallback from semisynchronous to asynchronous replication on the source means that it is possible for the rpl_semi_sync_source_enabled system variable to have a value of 1 on the source side even when semisynchronous replication is in fact not operational at the moment. You can monitor the Rpl_semi_sync_source_status status variable to determine whether the source currently is using asynchronous or semisynchronous replication.

It is not a bug. However, just because it is documented does not change the fact that this is a weird/unfriendly/counterintuitive way of doing things, one that opens the door to many, many possible issues. Especially given that we know the Source may fail to switch semi-sync back on.

Just to close this part, we can summarize as follows:

  • You activate semi-sync setting a global variable
  • Server/Source can disable it (silently) without changing that variable 
  • Server will never restore semi-sync automatically
  • The way to check if semi-sync works is to use the Status variable
  • When Rpl_semi_sync_source_status = 0 and rpl_semi_sync_source_enabled = 1, you had a timeout and the Source is now working in asynchronous replication
  • The way to reactivate semi-sync is to set rpl_semi_sync_source_enabled to OFF first and then rpl_semi_sync_source_enabled = ON (a sketch follows this list)
  • Replicas can have semi-sync set ON/OFF, but unless you STOP/START the replica IO_THREAD the state of the variable can be inconsistent with the state of the server.
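A minimal sketch of the reactivation sequence described above:

-- On the Source: toggle the flag to force semi-sync back on
SET GLOBAL rpl_semi_sync_source_enabled = OFF;
SET GLOBAL rpl_semi_sync_source_enabled = ON;

-- On each Replica: restart the IO thread so its state matches the variable
STOP REPLICA IO_THREAD;
START REPLICA IO_THREAD;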

What can go wrong?

Semi-synchronous is not seriously affecting the performance

Others have already discussed semi-sync performance in better detail. However, I want to add some color given the recent experience with our operator testing.
In the next graphs I will show you the behavior of writes/reads using Asynchronous replication and the same load with Semi-synchronous.
For the record, the test was a simple sysbench-tpcc run using 20 tables, 20 warehouses, and 256 threads for 600 seconds.

The graph above shows a nice and consistent read/write load with minimal fluctuations. This is what we like to have.

The graphs below represent the exact same load on the exact same environment, but with semi-sync activated and no timeout.

Aside from the performance loss (we went from 10k transactions/sec to 3k/sec), the constant stop-and-go imposed by the semi-sync mechanism has a very bad effect on application behavior when you have many concurrent threads and a high load. I challenge any serious production system to work this way.

Of course the results are in line with this yo-yo game:

In the best case, when everything was working as expected and nothing crazy was happening, I saw a loss of around 60%. I am not inclined to call that a minor performance drop.

But at least your data is safe

As already stated at the beginning, the purpose of semi-synchronous replication is to guarantee that the data on server A reaches server B before the OK is returned to the application.

In short, over a period of 1 second we should have very few transactions in flight and a limited number of transactions in the apply queue, while with standard (asynchronous) replication we may have … thousands.

In the graphs below we can see two lines:

  • The yellow line (Y2 axis) represents the number of GTIDs “in flight” from the Source to the destination. In case of a Source crash, those transactions are lost and we have data loss.
  • The blue line (Y1 axis) represents the number of GTIDs already copied over from the Source to the Replica but not yet applied in the database. In case of a Source crash we must wait for the Replica to process these entries before making the node write-active, or we will have data inconsistency (a query sketch for this measure follows the list).
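For the second measure (received but not yet applied), here is a hedged sketch of the kind of query that can be run on the Replica; it assumes GTID-based replication and the default (empty) channel name:

-- GTIDs sitting in the relay log of this Replica, not yet applied
SELECT GTID_SUBTRACT(
         (SELECT received_transaction_set
          FROM performance_schema.replication_connection_status
          WHERE channel_name = ''),
         @@GLOBAL.gtid_executed) AS received_not_applied;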

Asynchronous replication:

As expected, we can see a huge queue of transactions to apply from the relay log, and some spikes of transactions in flight.

Using Semi-synchronous replication:

Yes, apparently we have reduced the queue and there are no spikes, so no data loss.

But this happens when everything goes as expected, and we know that in production this is not the norm.
What if we need to enforce semi-sync, but at the same time we cannot set the timeout to a ridiculous value like 1 week?

Simple: we need a check that turns semi-sync back on as soon as it is silently disabled (for any reason).
However, doing this without waiting for the Replicas to cover the replication gap causes the following interesting effects:

Thousands of transactions are queued and shipped, with the result of a significant increase in the potential data loss and still a huge amount of data to apply from the relay log.

So the only possible alternative is to set the timeout to a crazy value. However, this can cause a full production stop in case a Replica hangs or, for any reason, disables semi-sync locally.

 

Conclusion

First of all, I want to say that the tests on our operator using Asynchronous replication show behavior consistent with standard deployments in the cloud or on premises. It has the same benefits, like better performance, and the same possible issues, such as a longer failover time when it needs to wait for a Replica to apply the relay-log queue.

The semi-synchronous flag in the operator is disabled, and the tests I have done lead me to say “keep it like that!”. At least unless you know very well what you are doing and are able to deal with a semi-sync timeout of days.

I was happy to have the chance to perform these tests, because they gave me the time and a reason to investigate the semi-synchronous feature more deeply.
Personally, I was not convinced by semi-synchronous replication when it came out, and I am not now. I have never seen a less consistent and less trustworthy feature in MySQL than semi-sync.

If you need a higher level of synchronicity in your database, just go for Group Replication or Percona XtraDB Cluster, and stay away from semi-sync.

Otherwise, stay on Asynchronous replication, which is not perfect but it is predictable.  

References

https://www.percona.com/blog/2012/01/19/how-does-semisynchronous-mysql-replication-work/

https://www.percona.com/blog/percona-monitoring-and-management-mysql-semi-sync-summary-dashboard/

https://www.percona.com/blog/2012/06/14/comparing-percona-xtradb-cluster-with-semi-sync-replication-cross-wan/

https://datto.engineering/post/lossless-mysql-semi-sync-replication-and-automated-failover

https://planetscale.com/blog/mysql-semi-sync-replication-durability-consistency-and-split-brains

https://percona.community/blog/2018/08/23/question-about-semi-synchronous-replication-answer-with-all-the-details/

https://dev.mysql.com/doc/refman/8.0/en/replication-semisync-installation.html

https://dev.mysql.com/worklog/task/?id=1720

https://dev.mysql.com/worklog/task/?id=6630

https://dev.mysql.com/worklog/task/?id=4398

https://github.com/mysql/mysql-server/blob/beb865a960b9a8a16cf999c323e46c5b0c67f21f/plugin/semisync/semisync_source.cc#L844

https://github.com/mysql/mysql-server/blob/beb865a960b9a8a16cf999c323e46c5b0c67f21f/plugin/semisync/semisync_source.cc#L881
