



Xtrabackup for Dummies
Written by Marco Tusa
Tuesday 22 January 2013 04:48

or a summary for lazy guys on how to use it...

 

I know that a lot has been written about Xtrabackup, and good documentation can be found on the Percona website.

Anyhow, I had to write a summary and a clear procedure for my teams, so I chose to share them here, given that they could benefit the whole community.

 

Each major topic is associated with a checklist that needs to be followed to prevent mistakes.

Overview

Xtrabackup is a hot backup tool that allows you to take backups of InnoDB with very limited impact on running transactions/operations.
In order to do this, xtrabackup copies the IBD files AND takes information out of the REDO log, from a starting point X.
This information then needs to be applied to the data files before restarting the MySQL server on restore.
In short, the backup operation is composed of 2 main phases:
  1. Copy of the files
  2. Copy of the delta of modifications from the REDO log.
Another phase is the "prepare" phase, where the REDO log modifications must be applied.
This phase can be done as soon as the backup is complete if the files are not STREAMED (we will see this later), or it must be done at restore time if STREAM was used.
Xtrabackup is composed of two main parts: the innobackupex wrapper script and the xtrabackup binary.
The xtrabackup binary comes in three different versions:
- xtrabackup
- xtrabackup_51
- xtrabackup_55
The binary to use depends on the MySQL version and is automatically selected by innobackupex as follows:
MySQL 5.0.* - xtrabackup_51
MySQL 5.1.* - xtrabackup_51
MySQL 5.1.* with InnoDB plugin - xtrabackup
Percona Server >= 11.0 - xtrabackup
MySQL 5.5.* - xtrabackup_55
It is important to note that while the backup of InnoDB tables is taken with minimal impact, the backup of MyISAM tables still requires a full table lock.
The full process can be described as follows:
  1. check connection to MySQL
  2. start the xtrabackup as child process
  3. wait until xtrabackup suspends the process
  4. connect to mysql
  5. if the server is a slave, wait for replication to catch up
  6. if the server is a master, it returns right away
  7. flush tables and acquire a read lock (unless explicitly asked in the settings to NOT get the lock)
  8. write slave information
  9. perform physical write of the files
  10. resume xtrabackup process
  11. unlock tables
  12. close connection to mysql
  13. copy last LRU information
  14. write backup status report

User and Grants

The backup user SHOULD NOT be a common user or a DBA user; it should be one created specifically for this operation, as below:
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'bckuser123';
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'backup'@'localhost';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
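To double-check that nothing beyond the three required privileges ended up on the account, a quick verification (a simple sanity check, not part of the original procedure) is:

SHOW GRANTS FOR 'backup'@'localhost';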
 

 

How to invoke Xtrabackup in the standard, easy way

This is the easiest way to take a FULL backup using Xtrabackup.

/usr/bin/innobackupex-1.5.1 --defaults-file=<path> --slave-info --user=<username> --password=<secret>   /path/to/destination/backup/folder

i.e.:

/usr/bin/innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123   /home/mysql/backup/

This will produce a full uncompressed backup.
root@mysqlt3:/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02# ll
total 200088
drwxr-xr-x 15 root root      4096 Dec 21 14:41 ./
drwxr-xr-x  3 root root      4096 Dec 21 14:46 ../
-rw-r--r--  1 root root       263 Dec 21 14:32 backup-my.cnf
-rw-r-----  1 root root 104857600 Dec 21 14:32 ibdata1
drwxr-xr-x  2 root root      4096 Dec 21 14:41 mysql/
drwxr-xr-x  2 root root      4096 Dec 21 14:41 performance_schema/
drwx------  2 root root      4096 Dec 21 14:41 security/
drwx------  2 root root      4096 Dec 21 14:41 test/
drwx------  2 root root      4096 Dec 21 14:41 test_audit/
drwx------  2 root root      4096 Dec 21 14:41 timstaging/
drwx------  2 root root      4096 Dec 21 14:41 timtags/
drwxr-xr-x  2 root root      4096 Dec 21 14:41 world/
-rw-r--r--  1 root root        13 Dec 21 14:41 xtrabackup_binary
-rw-r--r--  1 root root        26 Dec 21 14:41 xtrabackup_binlog_info
-rw-r-----  1 root root        85 Dec 21 14:41 xtrabackup_checkpoints
-rw-r-----  1 root root  99912192 Dec 21 14:41 xtrabackup_logfile
-rw-r--r--  1 root root        53 Dec 21 14:41 xtrabackup_slave_info
backup-my.cnf <--------------- minimal version of the my.cnf with the InnoDB information
ibdata1 <--------------------- Main tablespace
mysql/ <----------------------
world/ <---------------------- DBs ... with the copied files inside
xtrabackup_binary <----------- contains the name of the xtrabackup binary used
xtrabackup_binlog_info <------ Binary log information (name/position)
xtrabackup_checkpoints <------ Information regarding the LSN position and range
xtrabackup_logfile <---------- File containing the delta of the modifications
xtrabackup_slave_info <------- Slave information (if slave)
 

 

In this case, given it is NOT using streaming and it is not compressed, you can prepare the files right away:

innobackupex --use-memory=1G --apply-log /home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02

 

After a few operations you will see:
121221 15:57:04  InnoDB: Waiting for the background threads to start
121221 15:57:05 Percona XtraDB (http://www.percona.com) 1.1.8-20.1 started; log sequence number 30312932364

 

[notice (again)]
If you use binary log and don't use any hack of group commit,
the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 213145807, file name /home/mysql/instances/mtest1/binlog.000011
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
121221 15:57:05  InnoDB: Starting shutdown...
121221 15:57:09  InnoDB: Shutdown completed; log sequence number 30312932364
121221 15:57:09  innobackupex: completed OK!

 

When done, the files need to be put back in the right place:

innobackupex --defaults-file=/home/mysql/instances/mtest1/my.cnf --copy-back `pwd`

 

Note: be sure that the destination is empty, both DATA and IB_LOGS.
If you can, just rename the directories and create new ones, for example as sketched below.
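A minimal sketch of that step, assuming the data and InnoDB log paths used in this example:

mv /home/mysql/instances/mtest1/data /home/mysql/instances/mtest1/data_old
mkdir /home/mysql/instances/mtest1/data
mv /home/mysql/logs/mtest1/innodblog /home/mysql/logs/mtest1/innodblog_old
mkdir /home/mysql/logs/mtest1/innodblog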
When the copy is over:
innobackupex: Starting to copy InnoDB system tablespace
innobackupex: in '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02'
innobackupex: back to original InnoDB data directory '/home/mysql/instances/mtest1/data'
innobackupex: Copying '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02/ibdata1' to '/home/mysql/instances/mtest1/data/ibdata1'
innobackupex: Starting to copy InnoDB log files
innobackupex: in '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02'
innobackupex: back to original InnoDB log directory '/home/mysql/logs/mtest1/innodblog'
innobackupex: Finished copying back files.
121221 16:41:38  innobackupex: completed OK!

 

Modify the permissions on the data directory:

chown -R mysql:mysql /home/mysql/instances/mtest1;

 

Then restart MySQL. You will see that MySQL recreates the ib_logs as well, given that we removed them, but this is OK because we have already applied all the changes.
121221 16:44:08 mysqld_safe Starting mysqld daemon with databases from /home/mysql/instances/mtest1/data
121221 16:44:09 [Note] Plugin 'FEDERATED' is disabled.
121221 16:44:09 InnoDB: The InnoDB memory heap is disabled
121221 16:44:09 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
121221 16:44:09 InnoDB: Compressed tables use zlib 1.2.3
121221 16:44:09 InnoDB: Using Linux native AIO
121221 16:44:09 InnoDB: Initializing buffer pool, size = 1.0G
121221 16:44:09 InnoDB: Completed initialization of buffer pool
121221 16:44:09  InnoDB: Log file /home/mysql/logs/mtest1/innodblog/ib_logfile0 did not exist: new to be created
InnoDB: Setting log file /home/mysql/logs/mtest1/innodblog/ib_logfile0 size to 100 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100
...
121221 16:44:15 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
121221 16:44:15  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Last MySQL binlog file position 0 213145807, file name /home/mysql/instances/mtest1/binlog.000011
121221 16:44:17  InnoDB: Waiting for the background threads to start
21221 16:44:18 InnoDB: 1.1.8 started; log sequence number 30312933388
121221 16:44:18 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3310
121221 16:44:18 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
121221 16:44:18 [Note] Server socket created on IP: '0.0.0.0'.
121221 16:44:18 [Note] Event Scheduler: Loaded 0 events
121221 16:44:18 [Note] /home/mysql/templates/mysql-55p/bin/mysqld: ready for connections.
Version: '5.5.27-log'  socket: '/home/mysql/instances/mtest1/mysql.sock'  port: 3310  MySQL Community Server (GPL)

 

Checking the content, we have all the data back:
+--------------+--------+--------+----------+----------+-----------+----------+
| TABLE_SCHEMA | ENGINE | TABLES | ROWS     | DATA (M) | INDEX (M) | TOTAL(M) |
+--------------+--------+--------+----------+----------+-----------+----------+
| test         | InnoDB |     51 |  9023205 |  5843.14 |   1314.62 |  7157.76 |
| test         | NULL   |     51 |  9023205 |  5843.14 |   1314.62 |  7157.76 |
| test_audit   | InnoDB |      9 |  1211381 |   658.54 |    230.54 |   889.09 |
| test_audit   | NULL   |      9 |  1211381 |   658.54 |    230.54 |   889.09 |
| NULL         | NULL   |     61 | 10234586 |  6501.68 |   1545.17 |  8046.86 |
+--------------+--------+--------+----------+----------+-----------+----------+

7 rows in set (6.92 sec)


 

How to BACKUP with Xtrabackup using compression

One of the pain points in backup compression is the compression process itself, which can take a very long time and can be very inefficient.
We have chosen to use pigz, which is a parallel implementation of gzip.
Combined with the --stream option of xtrabackup, it generates very compact backup files in a shorter time.
The only thing to remember is that YOU CANNOT apply the logs while streaming, so you MUST do it afterwards, in the Restore phase.
So, given our database as before, we first have to be sure that pigz is in place:
#pigz --version
pigz 2.1.6

 

If this (or another version) is returned, all is OK.
Otherwise you need to install it:

apt-get install pigz (debian)

yum install pigz (centos)

To execute the backup, we will just change the last part of our command as follows:

./innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123  --stream=tar ./ | pigz -p4 - > /home/mysql/backup/2012_12_21_1300/full_mtest1.tar.gz

Once the copy is over you will have a file like this:
drwxr-xr-x  3 root root 4.0K Dec 21 17:08 ./
drwxr-xr-x  3 root root 4.0K Dec 21 14:16 ../
drwxr-xr-x 15 root root 4.0K Dec 21 16:31 2012-12-21_14-32-02/
-rw-r--r--  1 root root 737M Dec 21 17:18 full_mtest1.tar.gz <-------------

 

The whole process on a desktop machine takes:
121221 17:09:52  innobackupex-1.5.1: Starting mysql with options:  --defaults-file='/home/mysql/instances/mtest1/my.cnf' --password=xxxxxxxx --user='backup' --unbuffered --
121221 17:18:29  innobackupex-1.5.1: completed OK!

 

Less than 10 minutes for 8GB of data; not excellent, but it was running on a very low-end machine.
The file is then ready to be archived or, in our case, to be copied over to the slave for recovery.

How to RESTORE using Xtrabackup from stream

Once we have the file on the target machine, we have to expand it.
Very important here is to use the -i option, because otherwise the blocks of zeros in the archive will be read as EOF (End Of File), and your set of files will be a mess.
So the command will be something like:

tar -i -xzf full_mtest1.tar.gz

Again after the operation we will have:
-rw-r--r-- 1 root root       269 Dec 21 17:10 backup-my.cnf
-rw-rw---- 1 root root 104857600 Dec 21 17:12 ibdata1
drwxr-xr-x 2 root root      4096 Dec 21 17:40 mysql
drwxr-xr-x 2 root root      4096 Dec 21 17:39 performance_schema
drwxr-xr-x 2 root root      4096 Dec 21 17:39 security
drwxr-xr-x 2 root root      4096 Dec 21 17:39 test
drwxr-xr-x 2 root root      4096 Dec 21 17:39 test_audit
drwxr-xr-x 2 root root      4096 Dec 21 17:39 timstaging
drwxr-xr-x 2 root root      4096 Dec 21 17:39 timtags
drwxr-xr-x 2 root root      4096 Dec 21 17:39 world
-rw-r--r-- 1 root root        13 Dec 21 17:18 xtrabackup_binary
-rw-r--r-- 1 root root        26 Dec 21 17:18 xtrabackup_binlog_info
-rw-rw---- 1 root root        85 Dec 21 17:18 xtrabackup_checkpoints
-rw-rw---- 1 root root 282056704 Dec 21 17:18 xtrabackup_logfile
-rw-r--r-- 1 root root        53 Dec 21 17:18 xtrabackup_slave_info

 

Note that this time the information about the binary logs will be CRUCIAL.
Move or delete the old data directory and ib_logs.
We have to apply the logs, so assuming we have our file set in /home/mysql/recovery:

innobackupex --use-memory=1G --apply-log /home/mysql/recovery

 

Check CAREFULLY the output of the process; if everything is fine you will have something like this:
121221 17:52:17  InnoDB: Starting shutdown...
121221 17:52:21  InnoDB: Shutdown completed; log sequence number 30595333132
121221 17:52:21  innobackupex: completed OK!
 

 

Otherwise you must investigate; the most common issues are:
  • forgetting -i when expanding
  • running out of disk space
When the copy is over:
121221 18:04:48  innobackupex: completed OK!

 

Change the permissions

chown -R mysql:mysql /home/mysql/instances/mtestslave

 

Start the MySQL server.
Again, check the MySQL error log:
121221 18:06:38 mysqld_safe Starting mysqld daemon with databases from /home/mysql/instances/mtestslave/data
121221 18:06:39 [Note] Plugin 'FEDERATED' is disabled.
121221 18:06:39 InnoDB: The InnoDB memory heap is disabled
121221 18:06:39 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
121221 18:06:39 InnoDB: Compressed tables use zlib 1.2.3
121221 18:06:39 InnoDB: Using Linux native AIO
121221 18:06:39 InnoDB: Initializing buffer pool, size = 1.0G
121221 18:06:39 InnoDB: Completed initialization of buffer pool
121221 18:06:39  InnoDB: Log file /home/mysql/logs/mtestslave/innodblog/ib_logfile0 did not exist: new to be created
InnoDB: Setting log file /home/mysql/logs/mtestslave/innodblog/ib_logfile0 size to 10 MB
InnoDB: Database physically writes the file full: wait...
121221 18:06:40 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
121221 18:06:40  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Last MySQL binlog file position 0 150497896, file name /home/mysql/instances/mtest1/binlog.000001
121221 18:06:42  InnoDB: Waiting for the background threads to start
121221 18:06:43 InnoDB: 1.1.8 started; log sequence number 30595333644
121221 18:06:43 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3311
121221 18:06:43 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
121221 18:06:43 [Note] Server socket created on IP: '0.0.0.0'.
121221 18:06:43 [Note] Event Scheduler: Loaded 0 events
121221 18:06:43 [Note] /home/mysql/templates/mysql-55p/bin/mysqld: ready for connections.
Version: '5.5.27-log'  socket: '/home/mysql/instances/mtestslave/mysql.sock'  port: 3311  MySQL Community Server (GPL)
 

 

And now it is time to log in, check the data set, AND fix replication.
root@localhost [(none)]> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| security           |
| test               |
| test_audit         |
| world              |
+--------------------+
13 rows in set (0.04 sec)
root@localhost [(none)]> SELECT TABLE_SCHEMA, ENGINE, COUNT(1) as 'TABLES', sum(TABLE_ROWS) as 'ROWS',
TRUNCATE(sum(DATA_LENGTH)/pow(1024,2),2) as 'DATA (M)',
TRUNCATE(sum(INDEX_LENGTH)/pow(1024,2),2) as 'INDEX (M)',
TRUNCATE((sum(DATA_LENGTH)+sum(INDEX_LENGTH))/pow(1024,2),2) AS 'TOTAL(M)'
FROM information_schema.tables       WHERE TABLE_SCHEMA <> 'information_schema'
AND TABLE_SCHEMA <> 'mysql'     AND TABLE_SCHEMA not like 'avail%'
AND TABLE_SCHEMA <> 'maatkit'            AND TABLE_TYPE = 'BASE TABLE'
GROUP BY TABLE_SCHEMA, ENGINE      WITH ROLLUP;
+--------------------+--------------------+--------+----------+----------+-----------+----------+
| TABLE_SCHEMA       | ENGINE             | TABLES | ROWS     | DATA (M) | INDEX (M) | TOTAL(M) |
+--------------------+--------------------+--------+----------+----------+-----------+----------+
| performance_schema | PERFORMANCE_SCHEMA |     17 |    23014 |     0.00 |      0.00 |     0.00 |
| performance_schema | NULL               |     17 |    23014 |     0.00 |      0.00 |     0.00 |
| security           | InnoDB             |      1 |  1454967 |   170.73 |     60.75 |   231.48 |
| security           | NULL               |      1 |  1454967 |   170.73 |     60.75 |   231.48 |
| test               | InnoDB             |     51 |  9298913 |  6058.39 |   1347.78 |  7406.17 |
| test               | NULL               |     51 |  9298913 |  6058.39 |   1347.78 |  7406.17 |
| test_audit         | InnoDB             |      9 |  1189343 |   685.56 |    236.56 |   922.12 |
| test_audit         | NULL               |      9 |  1189343 |   685.56 |    236.56 |   922.12 |
| world              | MyISAM             |      3 |     5302 |     0.35 |      0.06 |     0.42 |
| world              | NULL               |      3 |     5302 |     0.35 |      0.06 |     0.42 |
| NULL               | NULL               |    227 | 11971539 |  6916.70 |   1645.74 |  8562.44 |
+--------------------+--------------------+--------+----------+----------+-----------+----------+
 

 

So far so good.
Now it is time to configure the slave.
First, take the current status:
root@localhost [(none)]> SHOW slave STATUS\G
Empty SET (0.00 sec)
root@localhost [(none)]>

 

OK, nothing; good.
Assign the master AND the log file and position from xtrabackup_binlog_info:
cat xtrabackup_binlog_info
binlog.000001    150497896

 

Prepare the command as:

change master to master_host='192.168.0.3', master_port=3310,master_user='replica',master_password='xxxx', master_log_file='binlog.000001',master_log_pos=150497896;

Check again:
root@localhost [(none)]> SHOW slave STATUS\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.0.3
Master_User: replica
Master_Port: 3310
Connect_Retry: 60
Master_Log_File: binlog.000001
Read_Master_Log_Pos: 150497896
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: binlog.000001
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 150497896
Relay_Log_Space: 107
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
1 row IN SET (0.00 sec)
root@localhost [(none)]>


Perfect. Start the slave:

slave start;

AND check again:


root@localhost [(none)]> SHOW slave STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting FOR master TO send event
Master_Host: 192.168.0.3
Master_User: replica
Master_Port: 3310
Connect_Retry: 60
Master_Log_File: binlog.000001
Read_Master_Log_Pos: 206843593
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 22872
Relay_Master_Log_File: binlog.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 150520518
Relay_Log_Space: 56346103
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 30
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 3310
1 row IN SET (0.00 sec)

 

OK, we have some delay as expected, but all is running as it should.
Our server is up and running.

How to do INCREMENTAL BACKUP with Xtrabackup 

 

Incremental backups work in a different way.
To understand them correctly, we need to remember that InnoDB pages have a sequence number, the LSN (Log Sequence Number); given that, each incremental backup starts from the previously stored LSN.

 

An incremental backup must have a first FULL backup as its base; then each following incremental will be stored in a different directory (by timestamp).

To restore an incremental backup, the full set of incrementals, from the BASE to the last point in time, needs to be applied.
So if the full backup was done on Monday and incrementals are taken every day, and we need to restore the full set on Friday, we must apply the logs to the BASE (Monday) following the chronological order: Monday (base), then Tuesday, Wednesday, Thursday, Friday (see the sketch below).

Only at that point will we have the full set of data, which can replace the one we had on the server.

Remember that this works only for InnoDB; other storage engines, like MyISAM, are copied in full every time.
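As a rough sketch of that chronological order (the directory names here are just illustrative; the real commands, and the meaning of --redo-only, are shown later in this section):

innobackupex --apply-log --redo-only /backup/full_monday
innobackupex --apply-log --redo-only /backup/full_monday --incremental-dir=/backup/inc_tuesday
innobackupex --apply-log --redo-only /backup/full_monday --incremental-dir=/backup/inc_wednesday
innobackupex --apply-log --redo-only /backup/full_monday --incremental-dir=/backup/inc_thursday
innobackupex --apply-log --redo-only /backup/full_monday --incremental-dir=/backup/inc_friday
innobackupex --apply-log /backup/full_monday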

Let us do this without compression first:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123   /home/mysql/backup/

The new directory 2013-01-10_13-07-24 is the BASE.

Checking the files inside, we can see the LSN position:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-07-24# cat xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 32473279827
last_lsn = 32473279827
Last LSN is 32473279827

 

As an exercise, let us do TWO incremental backups starting from this base, but first add some data...
root@localhost [test]> SHOW processlist;
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id  | User   | Host                      | db   | Command | Time | State  | Info                                                                                                 |
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
|  87 | root   | localhost                 | test | Query   |    0 | NULL   | SHOW processlist                                                                                     |
|  92 | stress | tusacentral01.LOCAL:37293 | test | Sleep   |    0 |        | NULL                                                                                                 |
|  94 | stress | tusacentral01.LOCAL:37296 | test | Query   |    0 | UPDATE | INSERT INTO tbtest30 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),731188002,"hd rsg  |
|  95 | root   | localhost:37295           | test | Query   |    0 | update | INSERT INTO test_audit.tbtest4 values(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
|  96 | stress | tusacentral01.local:37298 | test | Query   |    0 | NULL   | COMMIT                                                                                               |
|  97 | root   | localhost:37299           | test | Query   |    0 | update | INSERT INTO test_audit.tbtest4 values(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
|  98 | stress | tusacentral01.local:37300 | test | Query   |    0 | update | insert INTO tbtest15 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),598854171,"usfcrgl |
|  99 | root   | localhost:37301           | test | Query   |    0 | UPDATE | INSERT INTO test_audit.tbtest4 VALUES(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
| 100 | stress | tusacentral01.LOCAL:37302 | test | Query   |    0 | UPDATE | INSERT INTO tbtest15 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),22723485,"vno ehhr |
| 101 | stress | tusacentral01.local:37303 | test | Query   |    0 | update | insert INTO tbtest1 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),991063177,"nqdcogeu |
| 102 | stress | tusacentral01.LOCAL:37304 | test | Query   |    0 | UPDATE | INSERT INTO tbtest1 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),86481207,"sdfabnogn |
| 103 | stress | tusacentral01.local:37305 | test | Query   |    0 | NULL   | COMMIT                                                                                               |
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
12 rows in set (0.00 sec)

 

Now let us create the first incremental backup:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental --incremental-basedir=/home/mysql/backup/2013-01-10_13-07-24 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123   /home/mysql/backup/

 

Once the process is complete, we will have TWO directories:
total 20
drwxr-xr-x  5 root  root  4096 Jan 10 13:30 ./
drwxr-xr-x 18 mysql mysql 4096 Dec 28 12:16 ../
drwxr-xr-x 15 root  root  4096 Jan 10 13:17 2013-01-10_13-07-24/
drwxr-xr-x 15 root  root  4096 Jan 10 13:34 2013-01-10_13-30-43/ <-------- the last one is the Incremental
 

 

I was inserting data mainly into the TEST schema, and as you can see, test is the one that HAS more data in it, which represents the DELTA:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43# du -sh *
4.0K    backup-my.cnf
4.5M    ibdata1.delta
4.0K    ibdata1.meta
1.5M    mysql
212K    performance_schema
18M        security <---------------------------------
1.2G    test <---------------------------------
173M    test_audit <---------------------------
488K    world
4.0K    xtrabackup_binary
4.0K    xtrabackup_binlog_info
4.0K    xtrabackup_checkpoints
4.0K    xtrabackup_logfile
4.0K    xtrabackup_slave_info
 

 

On top of the usual files, in each schema directory and per table I will find some additional information inside the tablexyz.ibd.meta file:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43/test# cat tbtest1.ibd.meta
page_size = 16384
zip_size = 0
space_id = 1983

 

Checking the file xtrabackup_checkpoints, you will see the delta related to the LSN:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43# cat xtrabackup_checkpoints
backup_type = incremental
from_lsn = 32473279827 <------------ starting point
to_lsn = 33215076229   <------------ End point
last_lsn = 33215076229

 

Let us add some more data and take another incremental.
root@tusacentral03:/opt/percona-xtrabackup-2.0.4/bin# /opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-basedir=/home/mysql/backup/2013-01-10_13-30-43/ \
--defaults-file=/home/mysql/instances/mtest1/my.cnf  \
--slave-info --user=backup --password=bckuser123   /home/mysql/backup/

 

There is a HUGE difference from the previous command: the BASEDIR changes, and it must be the last incremental.
Given this is not always possible, it is good practice when working with scripts to read the LAST LSN from xtrabackup_checkpoints and pass it as a parameter with:

--incremental-lsn=xyz

This is the more elegant and flexible way, for example as sketched below.
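A minimal scripted sketch of that approach (the paths match the ones used in this example, and it assumes the checkpoints file is written to /home/mysql/backup/ via --extra-lsndir, as shown further down; adjust to your layout):

LAST_LSN=$(grep last_lsn /home/mysql/backup/xtrabackup_checkpoints | awk -F' = ' '{print $2}')
/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
  --incremental-lsn=$LAST_LSN \
  --defaults-file=/home/mysql/instances/mtest1/my.cnf \
  --slave-info --user=backup --password=bckuser123 /home/mysql/backup/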

OK, NOW we have 3 directories:
drwxr-xr-x 15 root  root  4096 Jan 10 13:17 2013-01-10_13-07-24/
drwxr-xr-x 15 root  root  4096 Jan 10 13:34 2013-01-10_13-30-43/ <--------- First incremental
drwxr-xr-x 15 root  root  4096 Jan 10 14:02 2013-01-10_13-57-04/ <--------- Second incremental

 

To have a full backup we now have to rebuild the set from the BASE, then the first incremental, then the second incremental; to do so we need to apply the changes but NOT the rollback operation.
If we forget and ALSO perform the rollback, we will not be able to continue applying the incremental backups.
To do so there are two ways, explicit and implicit:
  • Explicit --apply-log --redo-only
  • Implicit --apply-log-only
I like the explicit form because you know exactly what you pass, even if it is more verbose, so my commands will be:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24 --incremental-dir=/home/mysql/restore/2013-01-10_13-30-43

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24 --incremental-dir=/home/mysql/restore/2013-01-10_13-57-04

Once done, the BASE directory will contain the up-to-date information, including the binary log position:
root@tusacentral03:/home/mysql/restore/2013-01-10_13-07-24# cat xtrabackup_binlog_info
binlog.000005    275195253     <------------ Up to date from the incrementals
root@tusacentral03:/home/mysql/restore/2013-01-10_13-07-24# cat ../../backup/2013-01-10_13-07-24/xtrabackup_binlog_info
binlog.000003    322056528     <------------ Original from the Base
 

 

It is now time to finalize it all:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log /home/mysql/restore/2013-01-10_13-07-24

 

At this point, we just need to copy the content of the BASE directory /home/mysql/restore/2013-01-10_13-07-24 back onto the slave and change the permissions.
[root@tusacentral07 data]# scp -r root@192.168.0.3:/home/mysql/restore/2013-01-10_13-07-24/*  .
[root@tusacentral07 data]# sudo chown -R mysql:mysql .
 

 

At this point, if this is a slave, we just need to set up replication from the last binlog file and position as usual; otherwise we are done and can restart the server.

 

Incremental with compression

To perform incremental + compression, the process is the same, but instead of tar we need to use xbstream. For documentation purposes I have added the --incremental-lsn option with the value from the latest backup;
at this point, add some data and take the backup again.
Given I don't have the previous set of FULL + Incremental1 + Incremental2 UNPREPARED, I will take again 1 full and two compressed incrementals.

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-lsn=34020868857 \
--defaults-file=/home/mysql/instances/mtest1/my.cnf \
--slave-info --user=backup --password=bckuser123 \
--extra-lsndir=/home/mysql/backup/ \
--stream=xbstream --parallel=4 ./ | pigz -p4 - > /home/mysql/backup/incremental_2013_01_10_19_05.gz

Note the parameter --extra-lsndir, which allows you to specify an additional location for the LSN position file;
this is very important because it needs to be "grepped" for the next incremental backup.
Like:
grep last_lsn xtrabackup_checkpoints|awk -F' = ' '{print $2}'
34925032837

 

Also note the parameter --parallel=4 to implement multi-threaded streaming.
So the next one will be:
/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-lsn=34925032837 \
--defaults-file=/home/mysql/instances/mtest1/my.cnf \
--slave-info --user=backup --password=bckuser123 \
--extra-lsndir=/home/mysql/backup/ \
--stream=xbstream --parallel=4 ./ | pigz -p4 - > /home/mysql/backup/incremental_2013_01_11_11_35.gz

 

Once done, taking the LSN value again, it will be 35209627102.
At this point we have a compressed incremental backup using xbstream and pigz.
The point is: can we restore it correctly?
Copy all the files to the restore area/server:
root@tusacentral03:/home/mysql/backup# ll
total 631952
drwxr-xr-x  3 root  root       4096 Jan 11 11:36 ./
drwxr-xr-x 19 mysql mysql      4096 Jan 10 15:01 ../
drwxr-xr-x 15 root  root       4096 Jan 10 17:27 full_2013_01_10_18_54.gz
-rw-r--r--  1 root  root  360874358 Jan 11 11:25 incremental_2013_01_10_19_05.gz
-rw-r--r--  1 root  root  286216063 Jan 11 11:41 incremental_2013_01_11_11_35.gz
-rw-r--r--  1 root  root         93 Jan 11 11:41 xtrabackup_checkpoints
 

 

Then, to expand it:

pigz -d -c full_2013_01_10_18_54.gz | xbstream -x -v

Create 2 directories (one per incremental):

mkdir 2013_01_10_19_05

mkdir 2013_01_11_11_35

Then

pigz -d -c incremental_2013_01_10_19_05.gz | xbstream -x -v

pigz -d -c incremental_2013_01_11_11_35.gz | xbstream -x -v
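If you want to keep each incremental in its own directory, one way to do it (a sketch, assuming you run the expansion from inside each directory and the archives sit one level above) is:

cd 2013_01_10_19_05
pigz -d -c ../incremental_2013_01_10_19_05.gz | xbstream -x -v
cd ../2013_01_11_11_35
pigz -d -c ../incremental_2013_01_11_11_35.gz | xbstream -x -v
cd ..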

 

After that the procedure will be the same.

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27 --incremental-dir=/home/mysql/restore/2013_01_10_19_05

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27 --incremental-dir=/home/mysql/restore/2013_01_11_11_35

 

Finalize the process

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log /home/mysql/restore/2013-01-10_17-15-27

 

Copy to the production location.
To remove possibly unneeded files:

find . -name "*.TR*" -exec  \rm -v '{}' \;

 

Assign the correct ownership to the mysql user:

chown -R mysql:mysql data

Restart and, if it is a slave, set the right binlog file and position as before.
Done!

 

Incremental with compression and NetCat

There are two possible ways to perform the copy with NetCat:
  • one is "on the fly", meaning that the stream, instead of being directed to a local file, is pushed directly to the "Recovery" server.
  • the other is to write the file and then push it to the "Recovery" server.
Using the "on the fly" method is, in my opinion, conceptually dangerous.
This is because a backup operation should be as solid as possible.
Having the stream directed to the final server opens you up to possible issues at any network glitch.
Any network fluctuation could affect the whole backup, and there are also possible scenarios where a fully transmitted backup turns out to be corrupted.
This is because IF a network issue happens during the transfer, the process on the source or destination server, the one DOING the backup or the one receiving it, can crash or hang.
All of the above imposes a sanity check on the process and on the final result, to be sure that in case of failure the backup will be taken again, or at least that there will be awareness of the issue.

 

It needs to be said that the process is not so fragile when dealing with small amounts of data, but it can become much more of a concern when dealing with gigabytes, because of resource allocation limits on the source machine.

The NetCat solution involves two elements in our case:

  • server (sender)
  • client (receiver)
This is valid in our case, but it is worth mentioning that the server can also get input from the client; however, that is not the topic here.

The on-the-fly method

The backup process is supposed to be launched on the server with the following statement:
/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental  --incremental-lsn=35209627102 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123 --extra-lsndir=/home/mysql/backup/ --stream=xbstream --parallel=4 ./ |pigz -p4 - | nc -l 6666
while on the client:
nc 192.168.0.3 6666|pv -trb > /home/mysql/recovery/incremental_2013_01_14_12_05.gz
So the only difference is the addition of the NetCat commands and, obviously, the need to run the receiving end on the client.
Once the process is over, the expansion can be done as usual:
pigz -d -c incremental_2013_01_14_12_05.gz | xbstream -x -v

Two-step process

This is exactly the same as "Incremental with compression", but instead of doing a file copy, issue the commands:
On the server:
cat /home/mysql/backup/incremental_2013_01_14_12_05.gz | nc -l 6666| pv -rtb
On the client:
nc 192.168.0.3 6666|pv -trb > /home/mysql/recovery/incremental_2013_01_14_12_05.gz
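Whatever the transfer method, a simple sanity check that the archive arrived intact is to compare checksums on both sides; a minimal sketch, using the file names from this example:
on the server:
md5sum /home/mysql/backup/incremental_2013_01_14_12_05.gz
on the client:
md5sum /home/mysql/recovery/incremental_2013_01_14_12_05.gz
The two digests must match before you consider the transfer good.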

Conclusion

I think it makes sense to use NetCat ONLY in very specific cases, and only after developing solid scripts around it, including:
  • status checks of the backup operation
  • list of the transmitted files
  • LSN position validation
  • network status/monitor during the operations
In short a possible nightmare.

Check lists

Simple backup

[] Check binary version
[] Check binaries are in the PATH and accessible
[] Assign correct user/password in MySQL for backup user
[] Create or check backup data destination folder
[] Check my.cnf for datadir and be sure is pointing in the right place
[] Execute backup
[] Apply logs

Simple restore

[] be sure mysql server is down
[] remove / move data from original directory
[] remove / move ib_logs from original directory
[] run innobackupex --copy-back
[] check file permissions for mysql
[] start mysql
[] check the mysql log for error
[] log in and check for data.

Backup with Stream and compression

[] Check binary version
[] Check binaries are in the PATH and accessible
[] Assign correct user/password in MySQL for backup user
[] Create or check backup data destination folder
[] Check my.cnf for datadir and be sure is pointing in the right place
[] Check for Pigz presence and version
[] Execute backup

Restore from Stream on a different machines (slave)

[] be sure mysql server is down
[] remove / move data from original directory
[] remove / move ib_logs from original directory
[] copy over the compressed file
[] expand the backup in a safe directory
[] run innobackupex --copy-back
[] check file permissions for mysql
[] check that server WILL NOT restart the slave process on start
[] start mysql
[] check the mysql log for error
[] log in and check for data.
[] take the master log position
[] check for slave process information
[] apply new binary log position
[] restart slave
[] check slave status

Incremental Backup with Stream and compression

[] Check binary version
[] Check binaries are in the PATH and accessible
[] Assign correct user/password in MySQL for backup user
[] Create or check backup data destination folder
[] Check my.cnf for datadir and be sure is pointing in the right place
[] Check for Pigz presence and version
[] Check for the LSN position in xtrabackup_checkpoints
[] Assign the LSN to the "incremental-lsn" parameter
[] Be sure that the --extra-lsndir parameter is present and pointing to an existing directory
[] Execute backup

Incremental Restore from Stream on a different machines or slave

[] be sure mysql server is down
[] remove / move data from original directory
[] remove / move ib_logs from original directory
[] copy over the compressed files
[] validate the chronological order from the BASE to the last increment
{loop for each file set}
[] expand the backups in a safe directory, one at a time
[] be sure that you apply the logs with the "--apply-log --redo-only" parameters every time
[] be sure you always have the correct destination directory set (the BASE set)
[] remove each incremental once applied
{loop end}
[] run innobackupex --apply-log on the BASE set
[] remove IB_log files
[] copy files to destination directory
[] check file permissions for mysql
[] check that server WILL NOT restart the slave process on start
[] start mysql
[] check the mysql log for error
[] log in and check for data.
[] take the master log position
[] check for slave process information
[] apply new binary log position
[] restart slave
[] check slave status






Last updated Sunday 18 August 2013 17:23
 
MySQL NDB & MySQL with Galera: why we should not compare them
Written by Marco Tusa
Sunday 16 December 2012 22:37

Overview

In the last few months, we have seen more and more discussion about the use of the Galera solution for MySQL clustering.

I have been one of those who have heavily tested and implemented the Galera solution, actually with quite good results, and I have also presented SOME of them at Oracle Connect.

On the other side, I have been working with MySQL NDB for years (at least since 2007) at many customer sites, from simple to complex setups.

So even if I cannot consider myself a mega expert, I think I have some good experience and insight on both platforms.

The point here is that I was not happy reading some articles comparing the two, and not because of the kind of tests or the results.

Not because I prefer this or that, but simply because, from my point of view, it does not make any sense to compare the two.

We could spend pages and pages discussing the point here, but I want to try to give a simple, generalized idea of WHY it makes no sense, in a few lines.

NDB brief list

  • NDB is not a simple storage engine and can work independently; MySQL is "just" a client.
  • NDB is mainly an in-memory database, and even though it supports tables on disk, their cost does not always make sense.
  • NDB is fully synchronous; no action can be returned to the client until the transaction is really accepted on all nodes.
  • NDB uses horizontal partitioning to distribute data equally across nodes, but none of them has the whole dataset (unless you use one node group only, which happens ONLY when you don't know how to use it).
  • NDB replicates data by a specific factor, which is the number of replicas, and that replication factor will not change as the number of nodes increases.
  • Clients retrieve data from NDB as a whole, but internally data is retrieved per node, often using parallel execution. (Not going into the details here on the differences between select methods like match by key, range, the IN option, and so on.)
  • NDB scales by node group, which means it really scales in the dataset size it can manage and the operations it can execute, and it really scales!

Galera brief list

  • Galera is an additional layer working inside the MySQL context.
  • Galera requires InnoDB to work.
  • Galera offers "virtually synchronous" replication.
  • Galera replicates the full dataset across ALL nodes.
  • Galera's data replication overhead increases with the number of nodes present in the cluster.
  • Galera replicates data from one node to the cluster at commit, but applies it on each node through a FIFO queue (multi-threaded).
  • Galera does not offer any parallelism between the nodes when retrieving data; clients rely on the single node they access.

So why can they not be compared?

It should be quite clear that the two are very different, starting from the main conceptualization, given that NDB is a cluster of many node groups with a distributed dataset, while Galera is a very (highly) efficient replication layer.

But just to avoid confusion:

  1. NDB does data partitioning and data distribution with a redundancy factor.
  2. Galera just replicates data everywhere.
  3. NDB applies parallel execution to incoming requests, involving multiple node groups in the data fetch.
  4. Galera is not involved at all in the data fetch, and clients need to connect to one node or more by themselves, meaning the application has to manage parallel requests if needed.
  5. In NDB, the more node groups you add, the more you get in possible operations per second and in data stored/retrieved.
  6. In Galera, the more nodes you add, the more overhead you generate in replication, so more data needs to be "locally" committed by the replication layer, until the number of nodes and the operations executed on them compromise the performance of each node.

Conclusion

NDB Cluster is a real cluster solution, designed to scale internally and to perform internally all the operations required to guarantee high availability and synchronous data distribution.

Galera is a very efficient solution to bypass the currently inefficient mechanism MySQL has for replication.

Galera allows you to create a cluster of MySQL nodes in virtually synchronous replication, with almost zero complexity added on top of standard MySQL management.

Nevertheless, the resulting platform is composed of separate nodes, which, for good or bad, is not a system of distributed data.

 

Given that, the scenarios where we can use Galera or NDB are dramatically different; trying to compare them is like comparing a surfboard with a snowboard.

I love them both, and honestly I am expecting Galera deployments to increase dramatically in 2013, but I am still respecting my motto: "use the right tool for the job".

 

Let us try to make our lives easier and avoid confusion.

Happy MySQL to all!!

Ho-ho-ho

 


Last updated Sunday 18 August 2013 17:24
 
Another day at MySQL Connect 2012
Written by Marco Tusa
Monday 01 October 2012 08:14

Today was another good day, but I had a problem with one presentation and keynote.

the "Big Data is a Big Scam: Most of the Time". There is something that is not convincing me in the architecture,

and is mainly base on the lack of numbers and match between requirements and implementation.

 

The presentation was interesting in some aspects, and the tile approach is probably a good one.

 

I enjoyed the general idea, and I found the bravery of setting up circular replication across geographic regions awesome.

 

Nevertheless, I remain bewildered by the tile approach numbers, probably because the presentation was missing a few additional pieces of information.

 

Just to be clearer, I was trying to better understand the real numbers and how it works.

 

For example, if each tile is composed of 1 MySQL node and 2 data nodes, and given that this solution uses extra-large EC2 instances with 68GB of RAM, I can assume that we have approximately 65GB of data (allowing for some overhead for buffers and so on) for the current node group on this tile.

From the description I have also understood that the number of tiles implemented differs by region.

 

Given that, the number of node groups is different, and as a direct consequence so is the available memory, as well as the number of MySQL SQL nodes.

 

This raises an immediate question: do we have the same data set replicated worldwide?

 

Or do we have different sets?

 

Also,

each node group can host ~69GB, and the maximum number of data nodes in 7.2 is still 48 (http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-ndbd-definition.html), which means that the maximum size that can be served is 1.3TB for the whole cluster.

 

Given that during the presentation my understanding was that the data set is 100TB, where are the remaining 98.7TB located, now or in the future? Keep in mind that I asked if tables on disk were used, and the answer was NO.

Then I have another question mark in front of me.

If the set of tiles in the US is composed of 20 tiles (so 20 MySQL SQL nodes, 40 data nodes, and so on), it is set up like this mainly because it needs to sustain the traffic; it requires that setup for "calculation power" needs.

I assume (probably wrongly) that this is due to the number of requests per second coming in.

 

Given we have circular replication in the architecture, and given we have different numbers of tiles, and by consequence of MySQL nodes, HOW can an 8-MySQL-SQL-node setup (say in Australia) sustain the traffic coming in from a 20-MySQL-SQL-node setup?

 

I am sure we are missing something in the presentation in terms of real numbers.

 

Being a MySQL NDB lover, I would really like to have more information and details, given that what I got from the presentation was not enough for me.

 


Last updated Sunday 18 August 2013 17:24
 
MySQL 5.6.6 does not compile on Mac
Written by Marco Tusa
Sunday 22 July 2012 22:01

Today I needed to debug MySQL, so I downloaded the code and tried to compile it on my MacBook Pro.

Surprise: I got the following issue:

 

-- Performing Test HAVE_IB_ATOMIC_PTHREAD_T_GCC - Success
-- Check size of pthread_t
-- Check size of pthread_t - done
-- Performing Test HAVE_PEERCRED
-- Performing Test HAVE_PEERCRED - Failed
-- GTEST_LIBRARIES:optimized;/Users/marcotusa/mysql_work/source/gtest-1.6.0/test;debug;/Users/marcotusa/mysql_work/source/gtest-1.6.0
-- Configuring incomplete, errors occurred!
tusamac:mysql-5.6.5-m8 marcotusa$ cmake . -G Xcode -DCMAKE_INSTALL_PREFIX:PATH=/Users/marcotusa/mysql_work/template/mysql-5.6.5
 -DCMAKE_USE_RELATIVE_PATHS=1 -DDEBUG_EXTNAME=1 -DDISABLE_SHARED=OFF -DENABLE_DOWNLOADS=1 -DWITHOUT_SERVER=0
 -DWITH_ARCHIVE_STORAGE_ENGINE=1 -DWITH_BLACKHOLE_STORAGE_ENGINE=1 -DWITH_DEBUG=1 -DWITH_EMBEDDED_SERVER=0
 -DWITH_EXTRA_CHARSETS=all -DWITH_FAST_MUTEXES=1 -DWITH_FEDERATED_STORAGE_ENGINE=1 
-DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_LIBEDIT=1 -DWITH_LIBWRAP=1
-DWITH_PARTITION_STORAGE_ENGINE=1 -DWITH_PERFSCHEMA_STORAGE_ENGINE=1 
-- MySQL 5.6.5-m8
CMake Error at configure.cmake:196 (LIST):
  list sub-command REMOVE_DUPLICATES requires list to be present.
Call Stack (most recent call first):
  CMakeLists.txt:262 (INCLUDE)
 

 

Obviously, on Linux everything works fine.

After fighting with it a little on my own, I started to look around and found that bug 65050 talks about it.

But that is for version 5.5, and it is marked as Critical.

 

Anyhow, I reviewed the patch described there and looked in the file configure.cmake.

Obviously the line numbers were different, but I identified the right position: line 196 in 5.6.6-m8.

Replacing the line

196    LIST(REMOVE_DUPLICATES CMAKE_REQUIRED_LIBRARIES)

 

with 

196    IF(CMAKE_REQUIRED_LIBRARIES)
197      LIST(REMOVE_DUPLICATES CMAKE_REQUIRED_LIBRARIES)
198    ENDIF()

 

The guard simply skips REMOVE_DUPLICATES when CMAKE_REQUIRED_LIBRARIES is empty or undefined, which is exactly what triggers the error above. I ran cmake again and finally got:

tusamac:mysql-5.6.5-m8 marcotusa$ cmake . -G Xcode -DCMAKE_INSTALL_PREFIX:PATH=/Users/marcotusa/mysql_work/template/mysql-5.6.5 
-DCMAKE_USE_RELATIVE_PATHS=1 -DDEBUG_EXTNAME=1 -DDISABLE_SHARED=OFF -DENABLE_DOWNLOADS=1
 -DWITHOUT_SERVER=0 -DWITH_ARCHIVE_STORAGE_ENGINE=1 -DWITH_BLACKHOLE_STORAGE_ENGINE=1 
-DWITH_DEBUG=1 -DWITH_EMBEDDED_SERVER=0 -DWITH_EXTRA_CHARSETS=all -DWITH_FAST_MUTEXES=1
 -DWITH_FEDERATED_STORAGE_ENGINE=1 -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_LIBEDIT=1
 -DWITH_LIBWRAP=1 -DWITH_PARTITION_STORAGE_ENGINE=1 -DWITH_PERFSCHEMA_STORAGE_ENGINE=1 -DWITH_UNIT_TESTS=1
-- MySQL 5.6.5-m8
-- C warning options: -Wall -Wextra -Wunused -Wwrite-strings -Wno-strict-aliasing -Werror -Wdeclaration-after-statement
-- C++ warning options: -Wall -Wextra -Wunused -Wwrite-strings -Wno-strict-aliasing -Werror -Wno-unused-parameter -Woverloaded-virtual
-- Successfully downloaded http://googletest.googlecode.com/files/gtest-1.5.0.tar.gz to /Users/marcotusa/mysql_work/source/mysql-5.6.5-m8/source_downloads
-- GTEST_LIBRARIES:gtest
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/marcotusa/mysql_work/source/mysql-5.6.5-m8
tusamac:mysql-5.6.5-m8 marcotusa$ 
 

 

Easy and clean.

 

Thank you to Mike McQuaid for identifying the issue and submitting the patch.

And shame on MySQL/Oracle for keeping Mac OS X so far off the radar.


Last updated Sunday 18 August 2013 17:26
 
Portable table space part III or We can do it (with partition)
Written by Marco Tusa
Tuesday 08 May 2012 04:30

Portable table space part III or We can do it (with partition)

First of all, I have to say that, as usual, I did this third investigation because I had fun.

MySQL is still fun for me, and even if I am not touching the magic topics of the moment like SSD, replication and so on, I enjoyed what I was doing, and I can see how I can save my customers' data with this.

Second, I thank again the people who help me understand MySQL better, in this case Mattias Jonsson, who pointed me in the right direction.

 

The Solution

The solution is a little bit cumbersome, but it works.

I have assumed that:

  • the customer has a data set which contains a partitioned table
  • we can have a short period of read-only data

To use the DISCARD tablespace on a partition, the process is (a condensed SQL sketch follows the list):

  1. - LOCK the partitioned table and the destination table where I will put the data
  2. - EXCHANGE the partition with the table
  3. - Flush the table
  4. - save the files
  5. - EXCHANGE the table with the partition again
  6. - reload the records, IF ANY, from the table I have moved back to the partition (the delta inserted during the first exercise)
  7. - Flush again
  8. - save the files
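In SQL terms, here is the condensed sketch mentioned above. It is only a sketch, using the same names as the walkthrough below (tbtest_part with partition partD, and a staging table partD); the file copies are plain OS-level copies done outside MySQL:

-- steps 1-2: lock both objects and swap the partition out into the staging table
LOCK TABLES tbtest_part WRITE, partD WRITE;
ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;
UNLOCK TABLES;
-- steps 3-4: freeze the staging table, then copy partD.ibd / partD.cfg / partD.frm to the backup area
FLUSH TABLE partD WITH READ LOCK;
-- (OS-level copy happens here)
UNLOCK TABLES;
-- steps 5-6: swap the original data back and reload any rows that arrived in the meantime
ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;
INSERT INTO tbtest_part SELECT * FROM partD;
-- steps 7-8: freeze again and copy the staging table files once more to keep this delta too
FLUSH TABLE partD WITH READ LOCK;
-- (OS-level copy happens here, e.g. renamed to parDdelta.*)
UNLOCK TABLES;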

To recover, run the usual procedure against the table and then EXCHANGE it with the partition.
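In other words, something along these lines (again just a sketch, assuming the saved partD.ibd and partD.cfg files are available and that the DISCARD/IMPORT procedure from the previous parts of this series is used):

-- recreate an empty, non-partitioned staging table with the same structure as the partition
CREATE TABLE partD SELECT * FROM tbtest_part LIMIT 0;
ALTER TABLE partD DISCARD TABLESPACE;
-- (copy the saved partD.ibd and partD.cfg from the backup area into the schema directory)
ALTER TABLE partD IMPORT TABLESPACE;
-- put the recovered data back into the partitioned table
ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;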

Obviously, the more time passes, the more the live data set will differ from the saved tablespace, and the longer it will take to reload the data.

Anyhow, below is the full process in detail. Have fun.

Process details

INITIAL data set

 

(root@localhost) [test_tablespace1]>select * FROM partD;
Empty SET (0.00 sec)
(root@localhost) [test_tablespace1]>show CREATE TABLE tbtest_part \G
*************************** 1. row ***************************
TABLE: tbtest_part
CREATE TABLE: CREATE TABLE `tbtest_part` (
`a` int(11) NOT NULL DEFAULT '0',
`b` char(3) DEFAULT 'AAA',
`dat` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`part` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY LIST (`part`)
(PARTITION partA VALUES IN (1) ENGINE = InnoDB,
PARTITION partB VALUES IN (2) ENGINE = InnoDB,
PARTITION partC VALUES IN (3) ENGINE = InnoDB,
PARTITION partD VALUES IN (4) ENGINE = InnoDB) */
1 row IN SET (0.02 sec)
(root@localhost) [test_tablespace1]>select COUNT(*) FROM tbtest_part WHERE part=4;
+----------+
| COUNT(*) |
+----------+
| 1960864 |
+----------+
1 row IN SET (0.95 sec)
(root@localhost) [test_tablespace1]>
(root@localhost) [test_tablespace1]>create TABLE parD SELECT * FROM tbtest_part LIMIT 0;
 

 

I have to exchange the PARTITION and then use the delta to fill the difference.

First LOCK TABLES tbtest_part WRITE, partD WRITE; then run ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;

 

 

(root@localhost) [test_tablespace1]>ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;
Query OK, 0 rows affected (0.32 sec)
(root@localhost) [test_tablespace1]>unlock TABLES;
Query OK, 0 rows affected (0.00 sec)
(root@localhost) [test_tablespace1]>FLUSH TABLE partD WITH READ LOCK;
Query OK, 0 rows affected (0.00 sec)
 

 

Time to save the files in the backup directory

 

root@000033:/home/mysql/instances/my56testm/data/test_tablespace1# ll
total 1407388
drwx------ 2 mysql mysql 4096 2012-05-07 17:16 ./
drwxr-xr-x 5 mysql mysql 4096 2012-05-07 11:41 ../
-rw-rw---- 1 mysql mysql 65 2012-04-27 14:28 db.opt
-rw-rw---- 1 mysql mysql 523 2012-05-07 17:16 partD.cfg
-rw-rw---- 1 mysql mysql 8636 2012-05-07 11:41 partD.frm
-rw-rw---- 1 mysql mysql 96468992 2012-05-07 11:45 partD.ibd
-rw-r----- 1 mysql mysql 476 2012-05-05 19:06 tbtest_100MB.cfg
-rw-r----- 1 mysql mysql 8606 2012-05-05 19:06 tbtest_100MB.frm
root@000033:/home/mysql/instances/my56testm/data/test_tablespace1# cp partD.* /home/mysql/backup/

 

 

Let's assume that during a period of time N the table gets an additional 600 records, such that when we bring it back we have to reinsert them.

 

 

(root@localhost) [test_tablespace1]>ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;
Query OK, 0 rows affected (1.46 sec)
(root@localhost) [test_tablespace1]>
(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 1960864 |
+----------+
1 row IN SET (0.94 sec)
(root@localhost) [test_tablespace1]>select count(*) FROM partD;
+----------+
| count(*) |
+----------+
| 600 |
+----------+
1 row IN SET (0.00 sec)
(root@localhost) [test_tablespace1]>insert INTO tbtest_part SELECT * FROM partD;
Query OK, 600 rows affected (0.04 sec)
Records: 600 Duplicates: 0 Warnings: 0
(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 1961464 |
+----------+
1 row IN SET (0.94 sec)
 

 

Good, all data recovered. Now we must do something to preserve our set: we must save the partD table again to preserve the delta we missed during the EXCHANGE.

We must also rename the table's files.

 

 

root@000033:/home/mysql/instances/my56testm/data/test_tablespace1# cp partD.ibd /home/mysql/backup/parDdelta.ibd
root@000033:/home/mysql/instances/my56testm/data/test_tablespace1# cp partD.frm /home/mysql/backup/parDdelta.frm
root@000033:/home/mysql/instances/my56testm/data/test_tablespace1# cp partD.cfg /home/mysql/backup/parDdelta.cfg

 

 

 

Assume that we have a disaster and the data is lost; we will simulate it by truncating the partition.

 

(root@localhost) [test_tablespace1]>alter TABLE tbtest_part TRUNCATE partition partD;
Query OK, 0 rows affected (0.27 sec)
(root@localhost) [test_tablespace1]>
(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 0 |
+----------+
1 row IN SET (0.00 sec)

 

 

Help, my data is lost!!!

And assume I still have inserts coming in until I put the table in read only (hoping I can do it at least for a short time):

 

(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 200 |
+----------+
1 row IN SET (0.00 sec)
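As an aside, the transcript does not show how writes were actually stopped; one minimal way to do it for the recovery window (my own assumption, and it only works if the application does not connect with SUPER privileges) is the global read_only switch:

SET GLOBAL read_only = ON;   -- non-SUPER clients can no longer write
-- ... run the recovery as a SUPER user ...
SET GLOBAL read_only = OFF;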

 

 

Now we have to:

  1. re-attach the old dataset to a new table and exchange it with the partitioned table;
  2. save the new data in a new table;
  3. re-attach the tablespace to partition partD and reload the data;
  4. reload the delta between the crash and the new inserts.

let's go:

1) Do the different steps to have the original data back

 

(root@localhost) [test_tablespace1]>create TABLE partD SELECT * FROM partD_temp LIMIT 0;
Query OK, 0 rows affected (0.27 sec)
Records: 0 Duplicates: 0 Warnings: 0
(root@localhost) [test_tablespace1]>ALTER TABLE partD DISCARD tablespace;
Query OK, 0 rows affected (0.03 sec)

Copy old TABLE space FROM my BACKUP area TO DATA directory
(root@localhost) [test_tablespace1]>ALTER TABLE partD IMPORT tablespace;
Query OK, 0 rows affected (30.94 sec)
(root@localhost) [test_tablespace1]>ALTER TABLE tbtest_part EXCHANGE PARTITION partD WITH TABLE partD;
Query OK, 0 rows affected (1.43 sec)
(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 1960864 |
+----------+
1 row IN SET (0.93 sec)

 

 

Finally, save the DELTA of the data in a different table so as not to overwrite it.

 

(root@localhost) [test_tablespace1]> RENAME TABLE partD TO partD_AFTER_EMERGENCY;
Query OK, 0 rows affected (0.08 sec)
2) reload DATA of the partD delta (the 600 entries we saved before)
(root@localhost) [test_tablespace1]>create TABLE parDdelta SELECT * FROM tbtest_part LIMIT 0;
Query OK, 0 rows affected (0.29 sec)
Records: 0 Duplicates: 0 Warnings: 0
(root@localhost) [test_tablespace1]>ALTER TABLE parDdelta DISCARD tablespace;
Query OK, 0 rows affected (0.04 sec)
(root@localhost) [test_tablespace1]>ALTER TABLE parDdelta IMPORT tablespace;
Query OK, 0 rows affected (2.95 sec)
3) reload the DATA FROM the reattached tablespace
(root@localhost) [test_tablespace1]>select count(*) FROM parDdelta;
+----------+
| count(*) |
+----------+
| 600 |
+----------+
1 row IN SET (0.00 sec)
(root@localhost) [test_tablespace1]>insert INTO tbtest_part SELECT * FROM parDdelta;
Query OK, 600 rows affected (0.04 sec)
Records: 600 Duplicates: 0 Warnings: 0

 

 

4) reload the missing delta after the EMERGENCY

 

(root@localhost) [test_tablespace1]>insert INTO tbtest_part SELECT * FROM partD_AFTER_EMERGENCY;
Query OK, 200 rows affected (0.06 sec)
Records: 200 Duplicates: 0 Warnings: 0
(root@localhost) [test_tablespace1]>select count(*) FROM tbtest_part WHERE part=4;
+----------+
| count(*) |
+----------+
| 1961664 |
+----------+
1 row IN SET (0.94 sec)

 

 

Done, the whole dataset is rebuilt.

 

Conclusion

It is possible to have per PARTITION tablespace backups using DISCARD/IMPORT and EXCHANGE.

It is also possible to recover any delta that the application inserts in the meantime.

It is obvious that during the process the dataset will be inconsistent, so the best thing to do is to avoid inserts.

Copying the specific tablespace is much easier and faster than reloading from a full backup, if the issue is limited to a single tablespace or partition.

What I have done here manually should be automated with a script, which will simplify the process and make it much faster.

 

I hope you enjoyed the whole thing as much as I did during the tests.

 

 

 


Last updated Sunday 18 August 2013 17:26
 