
08 Feb 2013
MySQL and drop packet
Written by Marco Tusa

# Overview

Last night a customer called us because they were having issues on the server and a loss of performance on the MySQL server.
When I joined the bridge I asked the customer for a quick report of their experience and concerns so far.

Luckily all the participants were technically skilled (the team was composed of SAs, DBAs, and a team leader), so I was able to get a good overview in a short time.
There were mainly two fronts: a problem on the server at the network layer, and MySQL not being able to efficiently manage the number of thread-opening requests.

The machine had a single NIC, storage attached by Fibre Channel, 8 CPUs with hyper-threading, 64GB of RAM, and heavy usage of NFS.

In the past the same server had also used NFS volumes for MySQL, but by now everything had been moved to the attached storage.

As mentioned, the issue was that the NIC was reporting dropped packets and MySQL was struggling to manage the number of threads; incoming requests ranged between 200 and 1000 connection requests, while the server was only managing 200-300 active threads, which was not enough.

I started by reviewing the server and NIC issue; talking with the SAs, they reported that the NIC receive buffer was already set to its maximum of 4096k.

Starting the investigation from there, I reviewed back_log, net.ipv4.tcp_max_syn_backlog, and the other parameters related to the TCP buffers:
```
CURRENT TCP buffer setting
------------------------------
net.ipv4.tcp_mtu_probing = 0
net.core.rmem_max = 131071
net.core.wmem_max = 131071
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
------------------------------
```

The settings were misconfigured, given that the tcp_rmem/tcp_wmem values cannot override the net.core maxima: the upper limits set for TCP auto-tuning (4194304) were invalid, being effectively capped at the core value of 131071.

Given that those values were not appropriate for a machine supporting high traffic, I suggested:
```
Suggested TCP buffer settings
------------------------------
# TCP max buffer size
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# Linux auto-tuning TCP buffer
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# length of the processor input queue
net.core.netdev_max_backlog = 30000

# default congestion control is htcp
net.ipv4.tcp_congestion_control = htcp
```
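The relationship between the core limits and the TCP auto-tuning maxima can be verified with a short script. This is just a sketch (not part of the original runbook), fed with the values from this case; the function name and labels are hypothetical:

```shell
#!/bin/sh
# Sketch: verify that the TCP auto-tuning maxima do not exceed
# net.core.rmem_max / net.core.wmem_max, since the kernel effectively
# caps tcp_rmem / tcp_wmem at the core values.

check() {
    # $1 = label, $2 = net.core.*mem_max, $3 = max field of tcp_rmem/tcp_wmem
    if [ "$3" -gt "$2" ]; then
        echo "$1: MISCONFIGURED (tcp max $3 > core max $2, effective cap is $2)"
    else
        echo "$1: OK (tcp max $3 <= core max $2)"
    fi
}

check "rmem (old)" 131071 4194304     # the broken values found on the server
check "rmem (new)" 16777216 16777216  # the suggested values
```

On the original settings this flags the receive path as misconfigured, which matches the invalid auto-tuning limit described above.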

About htcp, see the referenced document explaining the algorithm in detail.
On the MySQL side, I reviewed a few parameters that have a direct relation with the threads:
```
MySQL changes
-----------------------------------------------
back_log          = 1024
thread_cache_size = 512
thread_stack      = 256K
wait_timeout      = 600
```
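One way to judge whether thread_cache_size is large enough is to compare Threads_created against Connections: a miss rate near zero means almost every new connection reuses a cached thread. A minimal sketch of the arithmetic, using purely hypothetical counter values:

```shell
#!/bin/sh
# Sketch: estimate the thread cache miss rate from two status counters.
# On a live server you would fetch them with something like:
#   mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_created'"   (etc.)
# The numbers below are purely hypothetical.

threads_created=1500
connections=300000

# miss rate (%) = Threads_created / Connections * 100; a value near 0
# means the thread cache is absorbing almost all new connections.
awk -v tc="$threads_created" -v conn="$connections" \
    'BEGIN { printf "thread cache miss rate: %.2f%%\n", tc / conn * 100 }'
```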

I decided to set back_log to the maximum queue length we had seen, to move thread_cache_size from 8 to 1/4 of the maximum number of connections, and then to make a few corrections, given that wait_timeout was set to its default and thread_stack was set as for 32-bit machines.

When we applied the values I was expecting to see the dropped-packet issue solved. Instead, MySQL was managing the incoming connections much better, but the dropped packets were still there, and the bad network behaviour was preventing data from flowing as it was supposed to.

We then focused on why this was happening, reviewing all the changes applied.

After some investigation and research, the customer realized that the value for the receive window on the NIC had not really been applied: even though Cisco declares it to be a dynamic value, in fact it requires a reboot of the machine.

After a reboot the NIC was working properly, and data was flowing fine with no dropped packets.
MySQL was managing the incoming threads efficiently, but I noticed, after the server warm-up, that performance was still not optimal.

So, doing some further tuning, I set thread_cache_size to 1024, matching the back_log number.
At this point MySQL was managing the incoming new requests very efficiently: Threads_cached jumped to 850, with obvious fluctuation between 8 threads and the maximum; Threads_created did the same, settling just a bit above the maximum number of created connections; and Threads_running jumped from 200-300 to 600-700.

# Conclusion

Dropped packets are a sign of insufficient buffers, either on the NIC or in the TCP stack; remember to review the related parameters.
"Surprisingly", even if the thread creation process has always been described as "easy" and "light", more aggressive thread settings make MySQL behave much better.

# Reference


Last updated: Monday 29 April 2013 07:22


22 Jan 2013
Some thoughts on our work, from a different perspective
Written by Marco Tusa


# 1. Introduction

Recently I had some spare time, which I used to read, think, process and analyze a few ideas, and to work on my own projects.

That was great for me, because I had the chance to develop new tools and to review a few concepts related to work. I had the chance to focus on the ideas behind the procedures or "how-tos", including reviewing what I do at work from different angles and perspectives.

One of the different, or better to say modified, perspectives was the outcome of a mental process that started with a reading.

A reading that I initially considered a waste of money, time and mental effort.

This is because the topic discussed, and the way it was presented, is something I had the chance to study at school, starting from secondary level. In fact the topic of "Critical thinking" is, or I should say was, included in school programs in our learning path, associated with "Logic", "Grammar" and "Philosophy".

So when I read the book I committed the crime of presumption, and felt bored while reading, until the moment the book covered the topic of "Decision Making". In that chapter the writer underlined how easy it is for us to be trapped by our own knowledge and ideas.

I stopped reading, closed the book, went to do something else, and then tried to empty my mind. Only at that point did I realize I had not really been reading at all; my eyes were, some part of my brain was, but my mind and my attention were not, because I had categorized the book from the initial chapters in the wrong way.

So I took a glass of wine, some time and good music, and opened the book again from the start. This time the book presented me a different scenario and perspective; it drove my thoughts through several mental paths and brought me to review some assumptions. At that point I was able to draw some parallels with our day-to-day activity, in life and at work. It was funny to discover how some personal best practices fit perfectly into a well-categorized universal model.

There was no magic, that is true, but what was good and interesting was how "school training" can forge your way of doing things instinctively, and also how that instinctive action path can be transformed and expressed in a few clear, universal and simple-to-read steps.

In particular I saw a good parallel with two critical areas of our work: the credibility of the source, and the decision-making process.

The rest of this writing is a summary, a walk through a few points I have identified as relevant and that I have seen covering some critical grey areas.

I am aware this is just a small part of the picture, and as usual I am open to discussion and comments. I will be more than happy if that happens; actually, it would mean I have reached my target.

# 2. Credibility of the Source

In our work, as in many others, having good credibility is not a plus but a must. Being credible as a company, or as a single person, does not come for free; it is a process that can take years to build and days to destroy.

Credibility is not only the result of "best practices" or "how-tos"; it is also the result of a correct approach and process in what we do, how we do it, and how we decide to do things (see the decision-making section below).

Whenever customers come to us for advice or help, they will ask themselves some questions, questions that we should answer, or the customers will redirect their attention to someone else.

Those questions are:

• Do they (we) have the relevant expertise (experience, knowledge and, if needed, formal qualifications)?
• Do they have the ability to observe accurately (eyesight, hearing, proximity to the event, absence of distractions, appropriate instruments, skill in using the instruments)?
• Does their reputation suggest that they are reliable?
• Do they have any interest or possible bias?
• Are they claiming, and providing evidence of, knowledge about MY context?
• Are they providing direct expertise?
• Is their level of expertise based on direct experience?
• Is what they say supported by evidence and a logical pattern?
• Are other sources consistent?

Answering all of the above, as said, is not something you can achieve with limited or superficial effort; it requires an extensive and constant shift in mentality, and some well-defined ideas and behaviours. My interpretation is the following:

• Always be "super partes", and avoid as much as possible following ephemeral trends, like the use and abuse of the "magic" term of the year, often used by others to show their capacity to be on-trend. Unfortunately, being there very often means doing without knowing. Being more conservative and analytic is the right thing to do when you are responsible for other people.
• Be under constant training and education, perform extensive tests, and provide public evidence of your conclusions and analysis. Publish few but focused blogs.
• Avoid blogging about everything, and avoid generalization; that will only create more noise and confusion. Yes, you will be there, but as a chatterer, not as an expert.
• When claiming something, provide evidence and a well-documented reasoning path to support your claim.
• Always put the claim in a clearly defined context and, when possible and available, include references to others' reasoning and/or similar evidence and sources.
• Whenever possible, try to be, or to use, a direct source: provide the tests you have done yourself, or review and repeat the tests done by others to validate them.
• Never use other sources' material as your own; instead, document and contextualize it, giving credit to the source. Again, double-check other sources' conclusions and provide evidence of your process.
• Whatever evidence or conclusion you provide needs to match the discussed topic exactly; avoid generalization. Assumptions can be good only if supported by good, documented reasoning.
• Do not rush; this is not a race. Do not send out an answer or a comment without having had the time to think about it. If possible, review it several times, and go over your reasoning with others, to be sure you have covered all the possible areas of uncertainty; if you still see any, declare them.

I will be more than happy to discuss the above points and, if possible, to extend them with more helpful suggestions.

# 3. Decision-making process

As mentioned previously, the other point relates to how we take our own decisions, and how we evaluate other people's conclusions, reasoning and motivations.

In our work we are constantly called to take decisions. Some of them are very simple, and we can act with very limited thinking, but others can be much more complex and may require significant effort on our side, more time and processing, to efficiently evaluate what the right decision is.

Unfortunately, very often we are affected by at least one of the following bad behaviours:

• We do not give ourselves enough time to think.
• We see a possible fit in a thought and we stay there, not giving ourselves the space to evolve.
• We do not process all the possible alternatives to the problem, and we do not develop more than one solution.
• We do not evolve our solution/action into a clear path of possible consequences.
• We sometimes forget what is relevant for us and how much this can impact our judgment.
• We are emotionally involved, and it affects the process and the decisions.
• We just do what our boss says to do.
• We let other people's recommendations influence us without applying analytical thinking.

Let's go through the above points, trying to clarify them and to see what we can do to prevent them.

• Time is relevant, and often we have to take decisions fast, but thinking requires time: time to gather information, time to analyze it, time for reasoning. The process should not be compromised by our rush, because the results will be affected and our decisions may be imprecise (if we are lucky) or completely wrong. Not only can a wrong decision happen, but when it happens because of rushing, we have no good reasoning to support and justify our mistake; in short, there will be no lesson learned, only the mistake.
• How often have we fallen in love with our ideas, and not been ready to divorce from them? Too often, we must admit. Instead, we should be able to go beyond them and process all the possible options. We should keep our minds open and listen to external suggestions, but always applying an analytical process.
• When I was a kid I learned that "each action implies a reaction". Before performing any action, before applying what we think is the correct decision, we should carefully think: "What will happen next?". We should analyze the actions, have a good understanding of the path of events our actions will generate, and be ready for possible unexpected bad behaviour.
• In our job, information about what is going on is everything. We should never stop digging for a better understanding. Never consider the output of some tool or script enough for our analysis, taking its results as given without an analytical review. We should stop only when we are really confident that we cannot get more relevant information, and if possible we should ask a trusted source to compare what we got, to see if we have missed anything.
• Sometimes we forget that we have personal commitments, and those can affect our judgment. For example, if we are fully focused on open source, it can become almost automatic for us to skip the evaluation of a non-open-source solution; or if we are Linux fundamentalists, just having to approach a Windows server can drive us to a non-objective approach to the problem. Again, we must keep our minds open and process the problem in analytical steps, not letting preconceptions into our thinking, but being able to filter them out and keep the mental process objective.
• How many times have we found that one customer so annoying? His reiterated questions lacked any sense, and wasn't his behaviour sometimes close to offensive? On the other hand, this other customer is really nice; he gives you a lot of credit, and he has a good understanding of the effort you are making to keep his environment in good shape. Can you honestly say that you have always given the two the same time and attention? It is a fact, it is human nature: being careful and nice comes easier with those who are nice to us. But this is not correct; we should always apply the same time, effort and reasoning independently of the customer's behaviour. The reasoning is the point, not our feelings. Understanding this, and being able to manage it, is a matter of being more or less professional.
• Do not follow the boss's or anyone else's advice and directions blindly. We must listen carefully to everyone; we should evaluate what they have to say and objectively extract whatever is good from their suggestions or recommendations. But never accept them without our own thinking and reasoning; it is also appropriate to share our process with them step by step, before getting to the conclusion. This will help us learn from each other's work, and will benefit everyone, also reducing the chance of mistakes.

Summarizing, we should ask ourselves the following before, during and after doing our reasoning for a decision:

1. What makes this decision necessary? What is the objective?
2. What am I going to recommend, and on what basis?
3. What other possible alternatives exist? Which one is the most realistic and feasible, and which the most innovative?
4. What are the possible consequences of my decision, and how likely are they to happen?
5. If these consequences happen, what will their relevance be, and how can we manage them?
6. Comparing the different solutions, which one will best mitigate the negative effects?
7. How can I transform my decision into action while reducing to a minimum the risk of bad behaviour or mistakes?

# 4. Conclusion

In the sections above I was just trying to report, in a concise and easy way, part of a more complex topic. I am aware that most of us do the right thing, just doing it right, but I am also confident that putting these simple points down in black and white can help us avoid mistakes and, if possible, define processes and checklists that other, less conscientious people can follow to make their work behaviour more trustworthy.

# 5. Reference

Ennis, R.H., Critical Thinking, Prentice Hall, 1996.

Fisher, A., The Logic of Real Arguments, Cambridge University Press, 1988.

Fisher, A., Critical Thinking: An Introduction, Cambridge University Press, 2001.


Last updated: Saturday 18 May 2013 13:48

22 Jan 2013
Xtrabackup for Dummy
Written by Marco Tusa

or a summary for lazy guys on how to use it...


I know that a lot has been written about Xtrabackup, and good documentation can be found on the Percona web site.

Anyhow, I had to write a summary and a clear procedure for my teams, so I chose to share it with everyone, given that it could benefit the whole community.

Each major topic is associated with a checklist that needs to be followed to prevent mistakes.

# 1. Overview

Xtrabackup is a hot backup tool that allows you to perform backups of InnoDB tables with very limited impact on the running transactions/operations.
To do this, xtrabackup copies the IBD files AND extracts information from the REDO log, from a starting point X.
This information then needs to be applied to the datafiles before restarting the MySQL server on restore.
In short, the backup operation is composed of two main phases:

1. copy of the files;
2. copy of the deltas extracted from the REDO log.

A further phase is the "prepare" phase, where the REDO log modifications must be applied.
This phase can be done as soon as the backup is complete if the files are not STREAMed (we will see this later), but it must be done on restore if STREAM was used.
Xtrabackup is composed of two main parts: the innobackupex wrapper script and the xtrabackup binary.
The xtrabackup binary comes in three different versions:

- xtrabackup
- xtrabackup_51
- xtrabackup_55

The binary to use depends on the MySQL binary version, and is automatically selected by innobackupex as follows:

- MySQL 5.0.* - xtrabackup_51
- MySQL 5.1.* - xtrabackup_51
- MySQL 5.1.* with InnoDB plugin - xtrabackup
- Percona Server >= 11.0 - xtrabackup
- MySQL 5.5.* - xtrabackup_55
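The mapping above can be sketched as a simple shell case statement. This is only an illustration of the selection table, not the actual innobackupex code, and the function name is made up:

```shell
#!/bin/sh
# Illustrative sketch of the binary-selection mapping described above;
# this is NOT the real innobackupex logic.
select_xtrabackup_binary() {
    # $1 = server version string, $2 = "plugin" when the InnoDB plugin is in use
    case "$1" in
        5.0.*) echo xtrabackup_51 ;;
        5.1.*) if [ "$2" = "plugin" ]; then echo xtrabackup; else echo xtrabackup_51; fi ;;
        5.5.*) echo xtrabackup_55 ;;
        *)     echo xtrabackup ;;   # e.g. Percona Server
    esac
}

select_xtrabackup_binary "5.5.27"         # -> xtrabackup_55
select_xtrabackup_binary "5.1.66" plugin  # -> xtrabackup
```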
It is important to note that while the backup of InnoDB tables is taken with minimal impact, the backup of MyISAM tables still requires a full table lock.
The full process can be described as follows:

1. check the connection to MySQL
2. start xtrabackup as a child process
3. wait until xtrabackup suspends the process
4. connect to MySQL
5. if the server is a slave, wait for replication to catch up
6. if the server is a master, return right away
7. flush tables and acquire a read lock (unless the settings explicitly ask to NOT take the lock)
8. write the slave information
9. perform the physical write of the files
10. resume the xtrabackup process
11. unlock the tables
12. close the connection to MySQL
13. copy the last LRU information
14. write the backup status report

# 2. User and Grants

The backup user SHOULD NOT be a common user or a DBA user; it should be one created specifically for this operation, as below:
```
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'bckuser123';
REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'backup'@'localhost';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'backup'@'localhost';
FLUSH PRIVILEGES;
```

# 3. How to invoke Xtrabackup in the standard, easy way

This is the easiest way to take a FULL backup using Xtrabackup:

```
/usr/bin/innobackupex-1.5.1 --defaults-file=<path> --slave-info --user=<username> --password=<secret> /path/to/destination/backup/folder
```

i.e.:

```
/usr/bin/innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123 /home/mysql/backup/
```

This will produce a full uncompressed backup:

```
root@mysqlt3:/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02# ll
total 200088
drwxr-xr-x 15 root root      4096 Dec 21 14:41 ./
drwxr-xr-x  3 root root      4096 Dec 21 14:46 ../
-rw-r--r--  1 root root       263 Dec 21 14:32 backup-my.cnf
-rw-r-----  1 root root 104857600 Dec 21 14:32 ibdata1
drwxr-xr-x  2 root root      4096 Dec 21 14:41 mysql/
drwxr-xr-x  2 root root      4096 Dec 21 14:41 performance_schema/
drwx------  2 root root      4096 Dec 21 14:41 security/
drwx------  2 root root      4096 Dec 21 14:41 test/
drwx------  2 root root      4096 Dec 21 14:41 test_audit/
drwx------  2 root root      4096 Dec 21 14:41 timstaging/
drwx------  2 root root      4096 Dec 21 14:41 timtags/
drwxr-xr-x  2 root root      4096 Dec 21 14:41 world/
-rw-r--r--  1 root root        13 Dec 21 14:41 xtrabackup_binary
-rw-r--r--  1 root root        26 Dec 21 14:41 xtrabackup_binlog_info
-rw-r-----  1 root root        85 Dec 21 14:41 xtrabackup_checkpoints
-rw-r-----  1 root root  99912192 Dec 21 14:41 xtrabackup_logfile
-rw-r--r--  1 root root        53 Dec 21 14:41 xtrabackup_slave_info
```

```
backup-my.cnf <--------------- minimal version of the my.cnf with the InnoDB information
ibdata1 <--------------------- main tablespace
mysql/ <----------------------
world/ <---------------------- the databases, with their files copied in
xtrabackup_binary <----------- contains the name of the xtrabackup binary used
xtrabackup_checkpoints <------ information regarding the LSN position and range
xtrabackup_logfile <---------- file containing the delta of the modifications
xtrabackup_slave_info <------- slave information (if a slave)
```


In this case, given that it is NOT using streaming and is not compressed, you can prepare the files right away:

```
innobackupex --use-memory=1G --apply-log /home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02
```

After a few operations you will see:

```
121221 15:57:04  InnoDB: Waiting for the background threads to start
121221 15:57:05 Percona XtraDB (http://www.percona.com) 1.1.8-20.1 started; log sequence number 30312932364

[notice (again)]
If you use binary log and don't use any hack of group commit,
the binary log position seems to be:
InnoDB: Last MySQL binlog file position 0 213145807, file name /home/mysql/instances/mtest1/binlog.000011
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
121221 15:57:05  InnoDB: Starting shutdown...
121221 15:57:09  InnoDB: Shutdown completed; log sequence number 30312932364
121221 15:57:09  innobackupex: completed OK!
```

When done, the files need to be put back in the right place (here run from inside the backup directory):

```
innobackupex --defaults-file=/home/mysql/instances/mtest1/my.cnf --copy-back `pwd`
```

Note: be sure that the destination is empty, both the DATA directory and the IB_LOGS.
If it is not, you can just rename the directories and create new ones.
When the copy is over:

```
innobackupex: Starting to copy InnoDB system tablespace
innobackupex: in '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02'
innobackupex: back to original InnoDB data directory '/home/mysql/instances/mtest1/data'
innobackupex: Copying '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02/ibdata1' to '/home/mysql/instances/mtest1/data/ibdata1'
innobackupex: Starting to copy InnoDB log files
innobackupex: in '/home/mysql/backup/2012_12_21_1300/2012-12-21_14-32-02'
innobackupex: back to original InnoDB log directory '/home/mysql/logs/mtest1/innodblog'
innobackupex: Finished copying back files.
121221 16:41:38  innobackupex: completed OK!
```

Modify the permissions on the data directory:

```
chown -R mysql:mysql /home/mysql/instances/mtest1
```

Then restart MySQL. You will see that MySQL recreates the iblogs as well, given that we removed them, but this is OK because we have already applied all the changes:

```
121221 16:44:08 mysqld_safe Starting mysqld daemon with databases from /home/mysql/instances/mtest1/data
121221 16:44:09 [Note] Plugin 'FEDERATED' is disabled.
121221 16:44:09 InnoDB: The InnoDB memory heap is disabled
121221 16:44:09 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
121221 16:44:09 InnoDB: Compressed tables use zlib 1.2.3
121221 16:44:09 InnoDB: Using Linux native AIO
121221 16:44:09 InnoDB: Initializing buffer pool, size = 1.0G
121221 16:44:09 InnoDB: Completed initialization of buffer pool
121221 16:44:09  InnoDB: Log file /home/mysql/logs/mtest1/innodblog/ib_logfile0 did not exist: new to be created
InnoDB: Setting log file /home/mysql/logs/mtest1/innodblog/ib_logfile0 size to 100 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Progress in MB: 100
...
121221 16:44:15 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
121221 16:44:15  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Last MySQL binlog file position 0 213145807, file name /home/mysql/instances/mtest1/binlog.000011
121221 16:44:17  InnoDB: Waiting for the background threads to start
121221 16:44:18 InnoDB: 1.1.8 started; log sequence number 30312933388
121221 16:44:18 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3310
121221 16:44:18 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
121221 16:44:18 [Note] Server socket created on IP: '0.0.0.0'.
121221 16:44:18 [Note] Event Scheduler: Loaded 0 events
121221 16:44:18 [Note] /home/mysql/templates/mysql-55p/bin/mysqld: ready for connections.
Version: '5.5.27-log'  socket: '/home/mysql/instances/mtest1/mysql.sock'  port: 3310  MySQL Community Server (GPL)
```

Checking the content we will have all data back:
+--------------+--------+--------+----------+----------+-----------+----------+
| TABLE_SCHEMA | ENGINE | TABLES | ROWS     | DATA (M) | INDEX (M) | TOTAL(M) |
+--------------+--------+--------+----------+----------+-----------+----------+
| test         | InnoDB | 51     | 9023205  | 5843.14  | 1314.62   | 7157.76  |
| test         | NULL   | 51     | 9023205  | 5843.14  | 1314.62   | 7157.76  |
| test_audit   | InnoDB | 9      | 1211381  | 658.54   | 230.54    | 889.09   |
| test_audit   | NULL   | 9      | 1211381  | 658.54   | 230.54    | 889.09   |
| NULL         | NULL   | 61     | 10234586 | 6501.68  | 1545.17   | 8046.86  |
+--------------+--------+--------+----------+----------+-----------+----------+
7 rows in set (6.92 sec)

# 4. How to BACKUP using Xtrabackup with compression

One of the pains of backup compression is the compression process itself, which can take a very long time and be very inefficient.
We chose to use pigz, a parallel implementation of gzip.
Combined with the --stream option of xtrabackup, it generates very compact backup files in a shorter time.
The only thing to remember is that YOU CANNOT apply the logs on the stream, so you MUST do it later, in the restore phase.
So, given our database as before, we first have to be sure that pigz is in place:
#pigz --version
pigz 2.1.6

If this is returned (or another version), all is OK.
Otherwise you need to install it:

apt-get install pigz (debian)

yum install pigz (centos)

To execute the backup we just change the last part of our command as follows:

./innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123  --stream=tar ./ | pigz -p4 - > /home/mysql/backup/2012_12_21_1300/full_mtest1.tar.gz
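The shape of that pipeline can be tried on its own with generic stand-ins; in the sketch below, tar and gzip play the roles of innobackupex --stream=tar and pigz (pigz output is gzip-compatible), and all the paths are throwaway:

```shell
# Sketch with stand-in tools: stream a directory as tar, compress on the fly,
# the same pattern as "innobackupex --stream=tar ./ | pigz > file.tar.gz".
WORK=$(mktemp -d)
mkdir "$WORK/datadir"
echo "rows" > "$WORK/datadir/t1.ibd"
tar -C "$WORK" -cf - datadir | gzip -c > "$WORK/full_fake.tar.gz"
# the result is a normal gzip'd tar archive
tar -tzf "$WORK/full_fake.tar.gz"
```

With pigz installed, swapping `gzip -c` for `pigz -p4` gives the parallel compression described above.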

Once the copy is over you will have a file like this:
drwxr-xr-x  3 root root 4.0K Dec 21 17:08 ./
drwxr-xr-x  3 root root 4.0K Dec 21 14:16 ../
drwxr-xr-x 15 root root 4.0K Dec 21 16:31 2012-12-21_14-32-02/
-rw-r--r--  1 root root 737M Dec 21 17:18 full_mtest1.tar.gz <-------------

The whole process on a desktop machine takes:
121221 17:09:52  innobackupex-1.5.1: Starting mysql with options:  --defaults-file='/home/mysql/instances/mtest1/my.cnf' --password=xxxxxxxx --user='backup' --unbuffered --
121221 17:18:29  innobackupex-1.5.1: completed OK!

Less than 10 minutes for 8GB of data; not excellent, but it was running on a very low-end machine.
The file is then ready to be archived, or in our case to be copied over to the slave for recovery.

# 5. How to RESTORE using Xtrabackup from stream

Once we have the file on the target machine, we have to expand it.
Very important here is to use the -i option, because otherwise the blocks of zeros in the archive will be read as EOF (End Of File), and your set of files will be a mess.
So the string will be something like:

tar -i -xzf full_mtest1.tar.gz
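The effect of -i (--ignore-zeros) can be seen with a tiny experiment on throwaway files: a tar made of concatenated archives, which is how streamed backups end up, contains zero-block separators that plain tar treats as end of file:

```shell
# Demonstration of GNU tar --ignore-zeros on throwaway files.
WORK=$(mktemp -d); cd "$WORK"
echo one > f1
echo two > f2
tar -cf a.tar f1
tar -cf b.tar f2
cat a.tar b.tar > both.tar    # concatenated archives, zero blocks in between
tar -tf both.tar              # stops at the zero blocks: only f1 is listed
tar -i -tf both.tar           # skips the zero blocks: f1 AND f2 are listed
```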

Again after the operation we will have:
-rw-r--r-- 1 root root       269 Dec 21 17:10 backup-my.cnf
-rw-rw---- 1 root root 104857600 Dec 21 17:12 ibdata1
drwxr-xr-x 2 root root      4096 Dec 21 17:40 mysql
drwxr-xr-x 2 root root      4096 Dec 21 17:39 performance_schema
drwxr-xr-x 2 root root      4096 Dec 21 17:39 security
drwxr-xr-x 2 root root      4096 Dec 21 17:39 test
drwxr-xr-x 2 root root      4096 Dec 21 17:39 test_audit
drwxr-xr-x 2 root root      4096 Dec 21 17:39 timstaging
drwxr-xr-x 2 root root      4096 Dec 21 17:39 timtags
drwxr-xr-x 2 root root      4096 Dec 21 17:39 world
-rw-r--r-- 1 root root        13 Dec 21 17:18 xtrabackup_binary
-rw-r--r-- 1 root root        26 Dec 21 17:18 xtrabackup_binlog_info
-rw-rw---- 1 root root        85 Dec 21 17:18 xtrabackup_checkpoints
-rw-rw---- 1 root root 282056704 Dec 21 17:18 xtrabackup_logfile
-rw-r--r-- 1 root root        53 Dec 21 17:18 xtrabackup_slave_info

Note that this time the information about the binary logs will be CRUCIAL.
Move or delete the old data directory and ib_log files.
We have to apply the logs, so assuming we have our file set in /home/mysql/recovery:

innobackupex --use-memory=1G --apply-log /home/mysql/recovery

Check CAREFULLY the output of the process; if everything is fine you will have something like this:
121221 17:52:17  InnoDB: Starting shutdown...
121221 17:52:21  InnoDB: Shutdown completed; log sequence number 30595333132
121221 17:52:21  innobackupex: completed OK!


Otherwise you must investigate; the most common issues are:
• forgetting -i in the expand
• running out of disk space
When the copy is over:
121221 18:04:48  innobackupex: completed OK!

Change the permissions

chown -R mysql:mysql /home/mysql/instances/mtestslave

Start the mysql server.
Again check the mysql error log:
121221 18:06:38 mysqld_safe Starting mysqld daemon with databases from /home/mysql/instances/mtestslave/data
121221 18:06:39 [Note] Plugin 'FEDERATED' is disabled.
121221 18:06:39 InnoDB: The InnoDB memory heap is disabled
121221 18:06:39 InnoDB: Mutexes and rw_locks use InnoDB's own implementation
121221 18:06:39 InnoDB: Compressed tables use zlib 1.2.3
121221 18:06:39 InnoDB: Using Linux native AIO
121221 18:06:39 InnoDB: Initializing buffer pool, size = 1.0G
121221 18:06:39 InnoDB: Completed initialization of buffer pool
121221 18:06:39  InnoDB: Log file /home/mysql/logs/mtestslave/innodblog/ib_logfile0 did not exist: new to be created
InnoDB: Setting log file /home/mysql/logs/mtestslave/innodblog/ib_logfile0 size to 10 MB
InnoDB: Database physically writes the file full: wait...
121221 18:06:40 InnoDB: highest supported file format is Barracuda.
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
121221 18:06:40  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Last MySQL binlog file position 0 150497896, file name /home/mysql/instances/mtest1/binlog.000001
121221 18:06:42  InnoDB: Waiting for the background threads to start
121221 18:06:43 InnoDB: 1.1.8 started; log sequence number 30595333644
121221 18:06:43 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3311
121221 18:06:43 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
121221 18:06:43 [Note] Server socket created on IP: '0.0.0.0'.
121221 18:06:43 [Note] Event Scheduler: Loaded 0 events
121221 18:06:43 [Note] /home/mysql/templates/mysql-55p/bin/mysqld: ready for connections.
Version: '5.5.27-log'  socket: '/home/mysql/instances/mtestslave/mysql.sock'  port: 3311  MySQL Community Server (GPL)


And now it is time to log in, check the data set, AND fix replication.
root@localhost [(none)]> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| security           |
| test               |
| test_audit         |
| world              |
+--------------------+
13 rows in set (0.04 sec)

root@localhost [(none)]> SELECT TABLE_SCHEMA, ENGINE, COUNT(1) as 'TABLES', sum(TABLE_ROWS) as 'ROWS',
    -> TRUNCATE(sum(DATA_LENGTH)/pow(1024,2),2) as 'DATA (M)',
    -> TRUNCATE(sum(INDEX_LENGTH)/pow(1024,2),2) as 'INDEX (M)',
    -> TRUNCATE((sum(DATA_LENGTH)+sum(INDEX_LENGTH))/pow(1024,2),2) AS 'TOTAL(M)'
    -> FROM information_schema.tables
    -> WHERE TABLE_SCHEMA <> 'information_schema' AND TABLE_SCHEMA <> 'mysql'
    -> AND TABLE_SCHEMA not like 'avail%' AND TABLE_SCHEMA <> 'maatkit'
    -> AND TABLE_TYPE = 'BASE TABLE'
    -> GROUP BY TABLE_SCHEMA, ENGINE WITH ROLLUP;
+--------------------+--------------------+--------+----------+----------+-----------+----------+
| TABLE_SCHEMA       | ENGINE             | TABLES | ROWS     | DATA (M) | INDEX (M) | TOTAL(M) |
+--------------------+--------------------+--------+----------+----------+-----------+----------+
| performance_schema | PERFORMANCE_SCHEMA | 17     | 23014    | 0.00     | 0.00      | 0.00     |
| performance_schema | NULL               | 17     | 23014    | 0.00     | 0.00      | 0.00     |
| security           | InnoDB             | 1      | 1454967  | 170.73   | 60.75     | 231.48   |
| security           | NULL               | 1      | 1454967  | 170.73   | 60.75     | 231.48   |
| test               | InnoDB             | 51     | 9298913  | 6058.39  | 1347.78   | 7406.17  |
| test               | NULL               | 51     | 9298913  | 6058.39  | 1347.78   | 7406.17  |
| test_audit         | InnoDB             | 9      | 1189343  | 685.56   | 236.56    | 922.12   |
| test_audit         | NULL               | 9      | 1189343  | 685.56   | 236.56    | 922.12   |
| world              | MyISAM             | 3      | 5302     | 0.35     | 0.06      | 0.42     |
| world              | NULL               | 3      | 5302     | 0.35     | 0.06      | 0.42     |
| NULL               | NULL               | 227    | 11971539 | 6916.70  | 1645.74   | 8562.44  |
+--------------------+--------------------+--------+----------+----------+-----------+----------+

So far so good.
Now it is time to modify the slave.
First, take the current status:
root@localhost [(none)]> SHOW SLAVE STATUS\G
Empty set (0.00 sec)
root@localhost [(none)]>

Ok, nothing there; good.
Assign the master AND the log file and position from xtrabackup_binlog_info:
cat xtrabackup_binlog_info
binlog.000001    150497896

Prepare the command as:

 change master to master_host='192.168.0.3', master_port=3310,master_user='replica',master_password='xxxx', master_log_file='binlog.000001',master_log_pos=150497896;
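Since the two fields of xtrabackup_binlog_info map one-to-one onto master_log_file and master_log_pos, building that statement can be scripted. A minimal sketch using the values above (the host, user and password fields still have to be filled in by hand):

```shell
# Build the CHANGE MASTER fragment from xtrabackup_binlog_info.
# The sample file reproduces the content shown above.
printf 'binlog.000001\t150497896\n' > /tmp/xtrabackup_binlog_info
read BINLOG_FILE BINLOG_POS < /tmp/xtrabackup_binlog_info
echo "change master to master_log_file='$BINLOG_FILE', master_log_pos=$BINLOG_POS;"
# -> change master to master_log_file='binlog.000001', master_log_pos=150497896;
```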

Check again:
root@localhost [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.0.3
Master_User: replica
Master_Port: 3310
Connect_Retry: 60
Master_Log_File: binlog.000001
Read_Master_Log_Pos: 150497896
Relay_Log_File: mysql-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: binlog.000001
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 150497896
Relay_Log_Space: 107
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
1 row in set (0.00 sec)

Perfect, start the slave:

slave start;

AND CHECK again:

root@localhost [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.0.3
Master_User: replica
Master_Port: 3310
Connect_Retry: 60
Master_Log_File: binlog.000001
Read_Master_Log_Pos: 206843593
Relay_Log_File: mysql-relay-bin.000002
Relay_Log_Pos: 22872
Relay_Master_Log_File: binlog.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 150520518
Relay_Log_Space: 56346103
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 30
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 3310
1 row in set (0.00 sec)

Ok, we have some delay, as expected, but all is running as it should.
Our server is up and running.

# 6. How to do INCREMENTAL BACKUP with Xtrabackup

Incremental backup works in a different way.
To understand it correctly we need to remember that InnoDB pages have an LSN (Log Sequence Number); given that, each incremental backup starts from the previously stored LSN.

An incremental backup must have a first FULL backup as base; then each following incremental will be stored in a different directory (by timestamp).

To restore an incremental backup, the full set of incrementals, from the BASE to the last point in time, needs to be applied.
So if we have the full backup done on Monday, and incrementals are taken every day, and we need to restore the full set on Friday, we must apply the logs on the BASE (Monday) in chronological order: Monday (base), then Tuesday, Wednesday, Thursday, Friday.

Only at that point will we have the full set of data that can replace the one we had on the server.

Remember that this works only for InnoDB; other storage engines like MyISAM are copied in full every time.
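Since the backup directories are timestamped, the chronological apply order described above can be derived with a plain lexical sort. A small sketch with hypothetical directory names:

```shell
# Timestamped backup directory names sort lexically == chronologically.
WORK=$(mktemp -d); cd "$WORK"
mkdir 2013-01-11_02-00-00 2013-01-08_02-00-00 2013-01-10_02-00-00 \
      2013-01-07_02-00-00 2013-01-09_02-00-00
for d in $(ls -1d 2013-* | sort); do
  echo "apply $d"       # base first, then each incremental in order
done
```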

## 6.1. Let's make this work without compression

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123   /home/mysql/backup/

The new directory 2013-01-10_13-07-24 is the BASE.

Checking the files inside, we can see the LSN position:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-07-24# cat xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 32473279827
last_lsn = 32473279827
Last LSN is 32473279827

As an exercise, let us do TWO incremental backups starting from this base, but first add some data...
root@localhost [test]> SHOW processlist;
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| Id  | User   | Host                      | db   | Command | Time | State  | Info                                                                                                 |
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
| 87  | root   | localhost                 | test | Query   | 0    | NULL   | SHOW processlist                                                                                     |
| 92  | stress | tusacentral01.local:37293 | test | Sleep   | 0    |        | NULL                                                                                                 |
| 94  | stress | tusacentral01.local:37296 | test | Query   | 0    | update | INSERT INTO tbtest30 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),731188002,"hd rsg  |
| 95  | root   | localhost:37295           | test | Query   | 0    | update | INSERT INTO test_audit.tbtest4 values(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
| 96  | stress | tusacentral01.local:37298 | test | Query   | 0    | NULL   | COMMIT                                                                                               |
| 97  | root   | localhost:37299           | test | Query   | 0    | update | INSERT INTO test_audit.tbtest4 values(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
| 98  | stress | tusacentral01.local:37300 | test | Query   | 0    | update | INSERT INTO tbtest15 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),598854171,"usfcrgl |
| 99  | root   | localhost:37301           | test | Query   | 0    | update | INSERT INTO test_audit.tbtest4 VALUES(NEW.autoInc,NEW.a,NEW.uuid,NEW.b,NEW.c,NEW.counter,NEW.time,NE |
| 100 | stress | tusacentral01.local:37302 | test | Query   | 0    | update | INSERT INTO tbtest15 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),22723485,"vno ehhr |
| 101 | stress | tusacentral01.local:37303 | test | Query   | 0    | update | INSERT INTO tbtest1 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),991063177,"nqdcogeu |
| 102 | stress | tusacentral01.local:37304 | test | Query   | 0    | update | INSERT INTO tbtest1 (uuid,a,b,c,counter,partitionid,strrecordtype) VALUES(UUID(),86481207,"sdfabnogn |
| 103 | stress | tusacentral01.local:37305 | test | Query   | 0    | NULL   | COMMIT                                                                                               |
+-----+--------+---------------------------+------+---------+------+--------+------------------------------------------------------------------------------------------------------+
12 rows in set (0.00 sec)

Now let us create the first incremental backup:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental --incremental-basedir=/home/mysql/backup/2013-01-10_13-07-24 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123   /home/mysql/backup/

After the process is complete, we will have TWO directories:
total 20
drwxr-xr-x  5 root  root  4096 Jan 10 13:30 ./
drwxr-xr-x 18 mysql mysql 4096 Dec 28 12:16 ../
drwxr-xr-x 15 root  root  4096 Jan 10 13:17 2013-01-10_13-07-24/
drwxr-xr-x 15 root  root  4096 Jan 10 13:34 2013-01-10_13-30-43/ <-------- the last one is the Incremental


I was inserting data mainly in the TEST schema, and as you can see test is the one that has more data in it, which represents the DELTA:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43# du -sh *
4.0K    backup-my.cnf
4.5M    ibdata1.delta
4.0K    ibdata1.meta
1.5M    mysql
212K    performance_schema
18M        security <---------------------------------
1.2G    test <---------------------------------
173M    test_audit <---------------------------
488K    world
4.0K    xtrabackup_binary
4.0K    xtrabackup_binlog_info
4.0K    xtrabackup_checkpoints
4.0K    xtrabackup_logfile
4.0K    xtrabackup_slave_info


On top of the usual files, in each schema directory and for each table, I will find some additional information inside the tablexyz.ibd.meta file:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43/test# cat tbtest1.ibd.meta
page_size = 16384
zip_size = 0
space_id = 1983

Checking the file xtrabackup_checkpoints, you will see the delta related to the LSN:
root@tusacentral03:/home/mysql/backup/2013-01-10_13-30-43# cat xtrabackup_checkpoints
backup_type = incremental
from_lsn = 32473279827 <------------ starting point
to_lsn = 33215076229   <------------ End point
last_lsn = 33215076229

Let us add more data and take another incremental.
root@tusacentral03:/opt/percona-xtrabackup-2.0.4/bin# /opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-basedir=/home/mysql/backup/2013-01-10_13-30-43/ \
--defaults-file=/home/mysql/instances/mtest1/my.cnf  \
--slave-info --user=backup --password=bckuser123   /home/mysql/backup/

There is a HUGE difference from the previous command: the BASEDIR changed, and it must be the last incremental.
Given this is not always possible, it is good practice when working with scripts to read the last LSN from xtrabackup_checkpoints and pass it as a parameter with:

--incremental-lsn=xyz

This is the more elegant and flexible way.

Ok, NOW we have 3 directories:
drwxr-xr-x 15 root  root  4096 Jan 10 13:17 2013-01-10_13-07-24/
drwxr-xr-x 15 root  root  4096 Jan 10 13:34 2013-01-10_13-30-43/ <--------- First incremental
drwxr-xr-x 15 root  root  4096 Jan 10 14:02 2013-01-10_13-57-04/ <--------- Second incremental

To have a full backup we now have to rebuild the set from the BASE, then the first incremental, then the second incremental. To do so we need to apply the changes but NOT the rollback operation.
If we forget and perform ALSO the rollback, we will not be able to continue applying the incremental backups.
There are two ways to do this, explicit and implicit:
• Explicit: --apply-log --redo-only
• Implicit: --apply-log-only
I like the explicit one because you know exactly what you pass, even if it is more verbose, so my commands will be:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24 --incremental-dir=/home/mysql/restore/2013-01-10_13-30-43

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_13-07-24 --incremental-dir=/home/mysql/restore/2013-01-10_13-57-04
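Before each of these steps it is worth checking that the incremental really continues from the set prepared so far: the from_lsn of the incremental must equal the to_lsn of the previous set. A minimal sketch, using the xtrabackup_checkpoints values reported earlier in this article as sample data:

```shell
# Sanity check of the LSN chain between the base and an incremental.
cat > /tmp/base_checkpoints <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 32473279827
last_lsn = 32473279827
EOF
cat > /tmp/incr_checkpoints <<'EOF'
backup_type = incremental
from_lsn = 32473279827
to_lsn = 33215076229
last_lsn = 33215076229
EOF
BASE_TO=$(awk -F' = ' '/^to_lsn/ {print $2}' /tmp/base_checkpoints)
INCR_FROM=$(awk -F' = ' '/^from_lsn/ {print $2}' /tmp/incr_checkpoints)
# the incremental must start exactly where the previous set ends
[ "$INCR_FROM" -eq "$BASE_TO" ] && echo "chain ok"
```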

Once done, the BASE directory will contain the up-to-date information, including the binary log position:
root@tusacentral03:/home/mysql/restore/2013-01-10_13-07-24# cat xtrabackup_binlog_info
binlog.000005    275195253     <------------ Up to date after applying the incrementals
root@tusacentral03:/home/mysql/restore/2013-01-10_13-07-24# cat ../../backup/2013-01-10_13-07-24/xtrabackup_binlog_info
binlog.000003    322056528     <------------ Original from the Base


It is now time to finalize it all:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log /home/mysql/restore/2013-01-10_13-07-24

At this point we just need to copy the content of the BASE directory /home/mysql/restore/2013-01-10_13-07-24 back onto the slave, and change the permissions.
[root@tusacentral07 data]# scp -r tusa@192.168.0.3:/home/mysql/restore/2013-01-10_13-07-24/*  .
[root@tusacentral07 data]# sudo chown -R mysql:mysql .


At this point, if this is a slave, we just need to set up the replication from the last binlog file and position as usual; otherwise all is done and we can restart the server.

## 6.2. Incremental with compression

To perform incremental + compression, the process is the same, but instead of tar we need to use xbstream. For documentation purposes I have added --incremental-lsn with the value from the latest backup; at this point add some data, and take the backup again.
Given I don't have the previous set of FULL + Incremental1 + Incremental2 UNPREPARED, I will take again 1 full and two compressed incrementals.

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-lsn=34020868857 \
--defaults-file=/home/mysql/instances/mtest1/my.cnf \
--slave-info --user=backup --password=bckuser123 \
--extra-lsndir=/home/mysql/backup/ \
--stream=xbstream --parallel=4 ./ | pigz -p4 - > /home/mysql/backup/incremental_2013_01_10_19_05.gz

Note the parameter --extra-lsndir, which allows you to specify an additional location for the LSN file;
this is very important because that file needs to be "grep-ed" for the next incremental backup, like:

grep last_lsn xtrabackup_checkpoints | awk -F' = ' '{print $2}'
34925032837
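That grep step can be tried stand-alone; the sketch below recreates a checkpoint file with the incremental values shown earlier in this article and extracts last_lsn from it:

```shell
# Extract last_lsn from xtrabackup_checkpoints for the next --incremental-lsn.
cat > /tmp/xtrabackup_checkpoints <<'EOF'
backup_type = incremental
from_lsn = 32473279827
to_lsn = 33215076229
last_lsn = 33215076229
EOF
LSN=$(grep last_lsn /tmp/xtrabackup_checkpoints | awk -F' = ' '{print $2}')
echo "$LSN"   # -> 33215076229
```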

Also note the parameter --parallel=4, which enables multi-threaded streaming.
So the next one will be:
/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental \
--incremental-lsn=34925032837 \
--defaults-file=/home/mysql/instances/mtest1/my.cnf \
--extra-lsndir=/home/mysql/backup/ \
--stream=xbstream --parallel=4 ./ |pigz -p4 - > /home/mysql/backup/incremental_2013_01_11_11_35.gz

Once done, taking the LSN value again, it will be 35209627102.
At this point we have a compressed incremental backup made with xbstream and pigz.
The point is: can we restore it correctly?
Copy all the files to the restore area/server:
root@tusacentral03:/home/mysql/backup# ll
total 631952
drwxr-xr-x  3 root  root       4096 Jan 11 11:36 ./
drwxr-xr-x 19 mysql mysql      4096 Jan 10 15:01 ../
drwxr-xr-x 15 root  root       4096 Jan 10 17:27 full_2013_01_10_18_54.gz
-rw-r--r--  1 root  root  360874358 Jan 11 11:25 incremental_2013_01_10_19_05.gz
-rw-r--r--  1 root  root  286216063 Jan 11 11:41 incremental_2013_01_11_11_35.gz
-rw-r--r--  1 root  root         93 Jan 11 11:41 xtrabackup_checkpoints


then to expand it:

pigz -d -c full_2013_01_10_18_54.gz | xbstream -x -v

create 2 directories:

mkdir 2013_01_10_19_05

mkdir 2013_01_11_11_35

Then, expand each incremental inside its own directory:

pigz -d -c incremental_2013_01_10_19_05.gz | xbstream -x -v

pigz -d -c incremental_2013_01_11_11_35.gz | xbstream -x -v

After that, the procedure is the same.

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27 --incremental-dir=/home/mysql/restore/2013_01_10_19_05

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log --redo-only /home/mysql/restore/2013-01-10_17-15-27 --incremental-dir=/home/mysql/restore/2013_01_11_11_35

Finalize the process:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --use-memory=1G --apply-log /home/mysql/restore/2013-01-10_17-15-27

Copy it back to the production location.
To remove possibly unneeded files:

find . -name "*.TR*" -exec  \rm -v '{}' \;

Assign the correct ownership to the mysql user:

chown -R mysql:mysql data

Restart, and if it is a slave, set the right binlog file and position as before.
Done!

# 7. Incremental with compression and NetCat

There are two possible ways to perform the copy with NetCat:
• one is "on the fly", meaning that the stream, instead of being directed to a local file, is pushed directly to the "Recovery" server.
• the other is to write the file first, then push it to the "Recovery" server.
Using the "on the fly" approach is, in my opinion, conceptually dangerous.
This is because a backup operation should be as solid as possible.
Having the stream directed to the final server opens the door to problems at any network glitch.
Any network fluctuation could affect the whole backup, and there are also possible scenarios where a fully transmitted backup results in a corrupted file.
This is because IF a network issue happens during the transfer, the process on the source or destination server, the one doing the backup or the one receiving it, can crash or hang.
All the above imposes a sanity check on the process and on the final result, to be sure that in case of failure the backup will be taken again, or at least that there will be awareness of the issue.

It needs to be said that the process is not so fragile when dealing with a small amount of data, but it can become much more concerning when dealing with gigabytes, because of resource allocation limits on the source machine.

The NetCat solution involves two elements in our case:

• server (sender)
• client (receiver)
This is valid in our case, but it is worth mentioning that the server can also get input from the client; that is not a topic here.

## 7.1. On the fly

The backup process is supposed to be launched on the server with the following statement:

/opt/percona-xtrabackup-2.0.4/bin/innobackupex-1.5.1 --incremental --incremental-lsn=35209627102 --defaults-file=/home/mysql/instances/mtest1/my.cnf --slave-info --user=backup --password=bckuser123 --extra-lsndir=/home/mysql/backup/ --stream=xbstream --parallel=4 ./ | pigz -p4 - | nc -l 6666

while on the client:

nc 192.168.0.3 6666 | pv -trb > /home/mysql/recovery/incremental_2013_01_14_12_05.gz

So the only difference is the addition of the NetCat commands, and obviously the need to run the receiving end on the client.
Once the process is over, the expansion can be done as usual:
pigz -d -c incremental_2013_01_14_12_05.gz | xbstream -x -v

## 7.2. Two-step process

It is exactly the same as "Incremental with compression", but instead of doing a file copy, issue the commands:
on the server:

cat /home/mysql/backup/incremental_2013_01_14_12_05.gz | nc -l 6666 | pv -rtb

on the client:

nc 192.168.0.3 6666 | pv -trb > /home/mysql/recovery/incremental_2013_01_14_12_05.gz
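Whichever variant is used, it is worth pairing the transfer with a checksum comparison between source and destination; a minimal sketch, where the cp stands in for the actual nc transfer and the paths are throwaway:

```shell
# Compare checksums of the sent and received backup file.
echo "backup payload" > /tmp/src_backup.gz
cp /tmp/src_backup.gz /tmp/received_backup.gz   # stand-in for the nc copy
SRC_SUM=$(md5sum < /tmp/src_backup.gz | awk '{print $1}')
DST_SUM=$(md5sum < /tmp/received_backup.gz | awk '{print $1}')
[ "$SRC_SUM" = "$DST_SUM" ] && echo "transfer ok"
```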

## 7.3. Conclusion

I think it could make sense to use NetCat ONLY in very specific cases, and only after developing solid scripts around it, including:
• status checks of the backup operation
• a list of the transmitted files
• LSN position validation
• network status/monitoring during the operations
In short, a possible nightmare.

# 8. Check lists

## 8.1. Simple backup

[] Check binary version
[] Check binaries are present and accessible in the PATH
[] Assign the correct user/password in MySQL for the backup user
[] Create or check the backup data destination folder
[] Check my.cnf for datadir and be sure it points to the right place
[] Execute backup
[] Apply logs

## 8.2. Simple restore

[] be sure the mysql server is down
[] remove / move data from the original directory
[] remove / move ib_logs from the original directory
[] run innobackupex --copy-back
[] check file permissions for mysql
[] start mysql
[] check the mysql error log
[] log in and check the data

## 8.3. Backup with Stream and compression

[] Check binary version
[] Check binaries are present and accessible in the PATH
[] Assign the correct user/password in MySQL for the backup user
[] Create or check the backup data destination folder
[] Check my.cnf for datadir and be sure it points to the right place
[] Check for pigz presence and version
[] Execute backup

## 8.4. Restore from Stream on a different machine (slave)

[] be sure the mysql server is down
[] remove / move data from the original directory
[] remove / move ib_logs from the original directory
[] copy over the compressed file
[] expand the backup in a safe directory
[] run innobackupex --copy-back
[] check file permissions for mysql
[] check that the server WILL NOT restart the slave process on start
[] start mysql
[] check the mysql error log
[] log in and check the data
[] take the master log position
[] check the slave process information
[] apply the new binary log position
[] restart the slave
[] check the slave status

## 8.5. Incremental Backup with Stream and compression

[] Check binary version
[] Check binaries are present and accessible in the PATH
[] Assign the correct user/password in MySQL for the backup user
[] Create or check the backup data destination folder
[] Check my.cnf for datadir and be sure it points to the right place
[] Check for pigz presence and version
[] Check for the LSN position in xtrabackup_checkpoints
[] Assign the LSN to the "incremental-lsn" parameter
[] Be sure that the --extra-lsndir parameter is present and pointing to an existing directory
[] Execute backup

## 8.6. Incremental Restore from Stream on a different machine or slave

[] be sure the mysql server is down
[] remove / move data from the original directory
[] remove / move ib_logs from the original directory
[] copy over the compressed files
[] validate the chronological order from the BASE to the last increment
{loop for each file set}
[] expand the backup in a safe directory, one at a time
[] be sure to apply the logs with the "--apply-log --redo-only" parameters every time
[] be sure you always have the correct destination directory set (BASE set)
[] remove the incremental once applied
{loop end}
[] run innobackupex --apply-log on the BASE set
[] remove ib_log files
[] copy files to the destination directory
[] check file permissions for mysql
[] check that the server WILL NOT restart the slave process on start
[] start mysql
[] check the mysql error log
[] log in and check the data
[] take the master log position
[] check the slave process information
[] apply the new binary log position
[] restart the slave
[] check the slave status


Last updated: Sunday, 5 May 2013, 03:51

16
Dec
2012
 MySQL NDB & MySQL with Galera: why we should not compare them
 Written by Marco Tusa

# 1. Overview

In the last few months, we have seen more and more discussion of the use of the Galera solution for MySQL clustering.

I have been one of those who has heavily tested and implemented the Galera solution, actually with quite good results, and I have also presented SOME of them at Oracle Connect.

On the other side, I have been working with MySQL NDB for years (at least since 2007) at many customer sites, from simple to complex setups.

So even if I cannot consider myself a mega expert, I think I have some good experience and insight into both platforms.

The point here is that I was not happy reading some articles comparing the two, not because of the kind of tests, or the results.

Not because I prefer this or that, but simply because, from my point of view, it does not make any sense to compare the two.

We could spend pages and pages discussing the point, but I want to try to give a simple, generalized idea of WHY it makes no sense in a few lines.

# 2. NDB brief list

• NDB is not a simple storage engine and can work independently; MySQL is "just" a client.
• NDB is mainly an in-memory database, and even if it supports tables on disk, their cost does not always make sense.
• NDB is fully synchronous: nothing is returned to the client until the transaction is really accepted on all nodes.
• NDB uses horizontal partitioning to distribute data equally across nodes, and none of them holds the whole dataset (unless you use one node group only, which happens ONLY when you don't know how to use it).
• NDB replicates data by a specific factor, the number of replicas, and that replication factor does not change as the number of nodes increases.
• Clients retrieve data from NDB as a whole, but internally data is retrieved per node, often using parallel execution. (I am not going into the details here of the differences between select methods such as match by key, range, the IN option and so on.)
• NDB scales by node group, which means it really scales in the dataset size it can manage and the operations it can execute, and it really scales!
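To illustrate node groups and the replication factor, here is a minimal `config.ini` sketch (hostnames are assumptions, not a recommended production setup): with `NoOfReplicas=2` and four data nodes, NDB forms two node groups, each holding half of the partitioned dataset.

```ini
# Minimal MySQL Cluster config.ini sketch -- hostnames are assumptions.
[ndbd default]
NoOfReplicas=2                  # replication factor; 4 data nodes => 2 node groups

[ndb_mgmd]
HostName=mgmt.example.com       # management node

[ndbd]
HostName=data1.example.com      # node group 0
[ndbd]
HostName=data2.example.com      # node group 0 (replica)
[ndbd]
HostName=data3.example.com      # node group 1
[ndbd]
HostName=data4.example.com      # node group 1 (replica)

[mysqld]
# the MySQL server is "just" a client (SQL node)
```

Adding another pair of data nodes would add a third node group, and with it more dataset capacity and more parallel execution, without changing the replication factor.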

# 3. Galera brief list

• Galera is an additional layer working inside the MySQL context.
• Galera requires InnoDB to work.
• Galera offers "virtually synchronous" replication.
• Galera replicates the full dataset across ALL nodes.
• Galera's data replication overhead increases with the number of nodes present in the cluster.
• Galera replicates data from one node to the cluster on commit, but applies it on each node through a FIFO queue (multi-threaded).
• Galera does not offer any parallelism between the nodes when retrieving data; clients rely on the single node they access.
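For contrast, a minimal `my.cnf` sketch of the wsrep settings involved (node addresses, cluster name, and the provider path are assumptions): every node listed in `wsrep_cluster_address` holds the full dataset, and `wsrep_slave_threads` drains the FIFO apply queue in parallel.

```ini
# Minimal Galera my.cnf sketch -- addresses and paths are assumptions.
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB        # Galera requires InnoDB
innodb_autoinc_lock_mode=2

wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_cluster
wsrep_cluster_address=gcomm://node1,node2,node3   # full dataset on ALL nodes
wsrep_slave_threads=4                # parallel appliers for the FIFO queue
```

Note there is nothing here about partitioning or node groups: the node list only defines who receives the full replicated dataset.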

# 4. So why can they not be compared?

It should be quite clear that the two are very different, starting from the basic concept: NDB is a cluster of many node groups with a distributed dataset, while Galera is a very efficient (highly efficient) replication layer.

But just to avoid confusion:

1. NDB partitions and distributes data with a redundancy factor.
2. Galera just replicates data everywhere.
3. NDB applies parallel execution to the incoming request, involving more node groups in the data fetch.
4. Galera is not involved at all in the data fetch; clients need to connect to one node or more by themselves, which means the application must manage parallel requests when needed.
5. In NDB, the more node groups you add, the more you gain in possible operations per second and data archived/retrieved.
6. In Galera, the more nodes you add, the more overhead you generate in replication, since more data will have to be "locally" committed by the replication layer, until the number of nodes and the operations executed on them compromise the performance of each node.
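The asymmetry in points 5 and 6 can be sketched as a toy calculation. This is a minimal sketch only: the 10000 ops/s baseline and the 3% per-node overhead are pure assumptions for illustration, not benchmarks.

```shell
#!/bin/sh
# Toy scaling model -- all numbers are assumptions, not benchmarks.

# NDB: each node group adds capacity, because data is partitioned
# across groups rather than duplicated on every node.
ndb_capacity() { echo $(( $1 * 10000 )); }                        # $1 = node groups

# Galera: every write is certified/applied on every node, so write
# capacity does not grow; assume 3% overhead per extra node.
galera_capacity() { echo $(( 10000 - 10000 * 3 * ($1 - 1) / 100 )); }  # $1 = nodes

for n in 2 4 8; do
    echo "$n units: NDB $(ndb_capacity $n) ops/s, Galera $(galera_capacity $n) ops/s"
done
```

Whatever the real constants are on your hardware, the directions are the point: one curve grows with node groups, the other can only stay flat or degrade as nodes are added.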

# 5. Conclusion

NDB Cluster is a real cluster solution, designed to scale internally and to perform internally all the operations required to guarantee high availability and synchronous data distribution.

Galera is a very efficient solution to bypass the currently inefficient mechanism MySQL has for replication.

Galera allows you to create a cluster of MySQL nodes in virtually synchronous replication, with almost zero complexity added on top of standard MySQL management.

Nevertheless, the resulting platform is composed of separate nodes, which, for good or bad, is not a system of distributed data.

Given that, the scenarios where we can use Galera or NDB are dramatically different; trying to compare them is like comparing a surfboard with a snowboard.

I love them both, and honestly I expect Galera deployments to increase dramatically in 2013, but I am still respecting my motto: "use the right tool for the job".

Let us try to make our lives easier and avoid confusion.

Happy MySQL to all!!

Ho-ho-ho


Last updated: Monday, 29 April 2013, 07:22

20
Oct
2012
 Written by Marco Tusa

## 1. One year in Canada

On the 3rd of July we completed our first year in Canada, our first anniversary, an important milestone. Thinking back to one year ago, or even to two years ago when I joined the company, it is quite surprising how many difficult goals we successfully achieved. That is because, when moving a whole family, the number of elements that must be organized and kept on the right track is much larger than when moving alone or as a couple.

The merit of this success doesn't reside in me only, but in all my family, who have been working as a team the whole time, dealing with the issues we faced all together.

In short, another success story of teamwork!

## 2. Why move to Canada

As a reward for the unbelievable effort our kids made during the past year, we decided to send them to Italy for the vacation. We knew they would love to spend their time at the beach and sea with cousins and friends.

When they came back, my wife and I had a discussion with my son. He asked us to explain why we decided to move to Canada, and why we decided to remain here.

The question, raised after one year, was not so silly, and it was a good one, given that he was not including in it simple topics that could lead down the wrong path, like whether we like or dislike this or that.

The question was WHY: the real long-term reason why we chose to move.

Coming from a really underdeveloped country would make for an easy answer, but coming from Europe makes the whole story more difficult.

## 3. Am I sure I want to move?

We decided to move before I started to work at Pythian; at that time we were evaluating a few options, taking into account the destination country and the job offer.

So yes, we gave it our consideration, weighing the effort/cost of getting what we want in our own country against what it would be in the country we chose. In that respect we did comparison research, using mainly the indicators from the UBS bank report, and defined a short list of possible countries.

To be honest, Canada was not in the top three. So why did we choose it in the end?

Because the long-term view indicated that the country and the company were the right choice. It was the long-term plan that made the difference: when moving a family, the short term should be considered 3-4 years, but the long term is 10 years, or more correctly 20 years. Having the chance to get that perspective inside a company is not an easy find. Finding a country that can host it is not easy either, and that's why Canada.

## 4. Am I moving with a job?

When you already have a job, even if it is not the perfect one, it makes no sense to move without one. If you don't have a job, then don't move only because of a job offer; it will not give you a better life in the long term, unless you have other reasons for the move.

Common sense applies. In my case I had a good job and a good position, so I would not have moved if the alternative had not been more than interesting.

## 5. What about the family?

I could not move without my family, and actually we chose to move mainly because of the kids, so it made no sense to leave them behind. Moreover, a family is like a small company based on teamwork: it must act in sync and cannot, or should not, be split.

## 6. What to do before moving

Once we had decided where to go, and with the company sponsoring us, what we had to do was: ask for the work permit, find a decent house, and find a school for the kids, not necessarily in that order.

We decided that our priorities would be: school for the kids, house, work permit, in order to remain consistent with our main reason for moving.

Schools in Ottawa can be Catholic or public; if you are not Catholic the choice is easy, but if you are, that adds a variable. Then they can be English or French; given that our kids spoke neither English nor French, we decided that, at least for the first 2 years, Catholic English would be the best choice.

At that point the issue was to find a school with a good rating, and to do that we used the Ottawa site. A good rating is 8, but be careful to check the history: better an 8.1 moving up from 7 than an 8.8 moving down from 9. We chose St Marguerite d'Youville in Greenboro, not the best, but the institute is trying to do better. We also spoke with the principal and the teachers, and we liked them. Oh yes, you must plan at least one visit before moving, for both the house and the schools; so what we did was make a list of 3 schools and then, once in Ottawa, fix a meeting with each school to get direct feedback.

After that, choosing a home is just a matter of time. Our rule for choosing the house was: 5 minutes' walk from the school, then 20 minutes from work, and 10 minutes max from the closest food market.

Crossing those criteria and assigning each variable a value from 1 to 10 in relation to proximity helped us identify where to focus the house search.

For the practical search we used this site; we also found a good agent (Rocco Manfredi) who helped us a lot, being really honest and direct in providing advice.

When we had all of this in line, we went ahead with the work permit. Obviously, having the company sponsoring you helps a lot. In my case I had applied long ago for a working visa in Canada and had already been approved; at that time I did not move, given my work in other countries, but I was quite confident we would not have issues this time either.

When applying for the first time, you can choose to ask for a one-year work permit, or for more. Given that the minimum for applying for a permanent work visa is two years once in Canada, why apply for less than that? I asked for three years and got it in less than a month. That gave us the time to apply directly for permanent residence without bothering human resources with additional work, and without putting ourselves under stress. So longer is better; my suggestion is to ask for it.

## 7. When is the right time to move?

We came from a warm country, so we decided to move in the right season to get used to the different climate, but we also had to wait for the school year to be over. So we decided on the 3rd of July (immediately after Canada Day), one week after school is over in Italy.

My personal suggestion: never, ever come here in winter or in September/October; you will not have the time to adjust and will suffer more than you would otherwise.

Also, Ottawa is amazing in the summer, and there are so many things to do here that it helps a lot to tune your body and mind.

Having the chance to buy the right clothes for the winter takes a little bit of investigation and understanding.

Coming from a place where -1 Celsius is crazy cold, we were not prepared to manage -40 C, or at least that required some additional understanding. Coming here in the right season gave us the chance to talk with neighbors, friends and colleagues to collect advice and make the right choices when shopping.

## 8. What to do once here?

Here I give a few general pieces of advice based on my direct experience.

Canada is a country with a high level of immigration and a limited residential population compared with the available space and resources.

Moreover, Canada needs immigrants to make its economy prosper and to have people working and paying taxes. That is a very important point, because if you come here willing to work, willing to participate in the country's growth, then you will have all the possible help from the institutions. In short, being an immigrant, especially a skilled one, does not make you second class, and this can also be seen from the huge effort Canadian institutions are making to help immigrants settle correctly.

What you should do once here is follow a few small but very important points that will make you a productive resident and, at the same time, secure you in case of issues like illness.

One good root resource is: http://ottawa.ca/en/social_com/immigration/index.html for a general overview, or http://www.ontarioimmigration.ca/en/after/index.htm .

Once you are in your hotel room, or pension, on day one, be ready to cover these things right away:

•  Apply for SIN
•  Apply for OHIP
•  Open a bank account
•  Get information about working in Ontario
•  Find the services you need, close to home
•  Locate a doctor, dentist or other health services
•  Find a public library and other community services
•  Apply for the Canada Child Tax Benefit (CCTB)
•  Get a map of your community and learn about public transportation
•  Find language classes for you and your family

I did the SIN as soon as I arrived; this is a must to have a job, and you can look here to find the office closest to your place. Do not forget: this is priority one, do it right away, as I did.

Then, the same day, run and go for the OHIP; look here for the office closest to your place, and look for one that has the "health" option.

Keep in mind that Ontario will not cover you right away; you must wait three months before accessing medical coverage with OHIP. For more information look here.

You think you are done? No way! Open a bank account if you have not already.

What I did was open it BEFORE arriving, for two reasons:

1. It allows you to transfer any money here before you arrive;
2. Opening it, even with only a few dollars in, gives you access to the credit history system. The sooner you start to establish your history, the sooner you will get good conditions from banks and insurance companies. So do not wait!

Choose the bank you like; I am with TD, so far so good.

One note: I am Italian and I am used to bargaining on everything. Canadians seem not to bargain, or at least they say so, but that is not true; they just do it in a different way.

When you need to open an account at a bank, or to find insurance or whatever, do not stop at the first place. Instead, go around asking for conditions and prices, and then clearly tell them that you are looking around. When you have the best condition/price, do the tour again and ask for a better condition/price based on the latest good one. You will be really surprised by the results; I have seen my request escalate from manager to manager up to directors and so on, getting in the end VERY different numbers and conditions.

So you have your account, your SIN, your OHIP; what about the job? I assume you actually have one, otherwise why are you here? In case you don't, this is a decent reference here, and the job sites are here and here.

An important note: you should cover all the above within the first week, or better within two days, of your arrival in Canada. Don't waste your time; do what you have to, FAST!

Discovering the neighborhood is important, so look around for the essential services, like pharmacies, clinics, food stores, and whatever else you consider relevant for your daily life.

About doctors: it is important that you start to look for one as soon as possible. I made the mistake of waiting, and it took me more than 8 months to get a family doctor, because not all doctors take new patients.

There is a site that gives you indications, but honestly it is useless.

Go check directly in the clinics close to you and see if they have spots; it is faster and makes much more sense than waiting for the system to call you.

Hey, you are going to pay taxes, right? So start to use your own money! The community centers are there to support you in many activities: sports, arts, reading and so on. Use them; they are much cheaper than other services, or fully free (paid from taxes). Quality is not number one? Who cares, it is good enough to start, and remember you have already paid for it.

This site gives you a full indication of where the centers are. Go there and register; it will be a very nice surprise to discover how many things you can do with your own money.

If you have kids and you are eligible, you can apply for the CCTB; look at this site for more information and to discover whether you can apply.

Finally, in Ottawa there are many languages spoken, but the official ones are English and French.

English is a must, and you must be able to communicate in it, but French is also very often requested.

If you have a wife or kids, it could make sense to help them learn the languages. There are different programs that can help: some private and very expensive, some from the government that are really affordable. Information is at this site, with a locator tool.

I think this covers almost all the basics; the only missing thing is the obvious one, transportation. Distances here are huge, so you cannot walk everywhere unless you have the whole day free. Using buses is a good way to move; just note that the monthly pass is not cheap, so unless you plan to use it EVERY day, it doesn't make sense.

Cars are cheap as well, I mean basic ones; what is not cheap is the insurance, especially in the first year. But if you have kids and you want to live the city and the places around it, having a car is a must. Remember you must take the Ontario driving license; yours is not valid here, and the examiners are really ... fussy. Website for information here.

Keep also in mind that bicycles are a good alternative, and almost everywhere you will see dedicated paths for them, so it makes sense to use them, given that cycling is good for health, for traffic and for pollution.

Ok, that is really all.

## 9. What to do to survive the winter

Let us go back to my direct experience. The first thing we did was decide that we would not be stopped by the cold. Second, I don't want to die under the snow, so we paid for the snowplowing service.

Then, I love snow, and I like skating and skiing, so for me there was no problem, I had a lot to do. It was a different story for the rest of the family, but I did try to keep them with me, and in the end it worked out quite well.

By the end of the winter we were all able to go skating around, and I can go skiing with my son, who was enjoying it a lot.

Do not miss skating on the canal or the Winterlude festival; it is a very good way to learn a different approach to the crazy cold that will hit you.

Winter is long. I was aware I was not going to notice it much, partly because of work, and my kids would have to go to school, so no issue for them either. But what about my wife? She came here without a job, which could be nice for some time, but then it could become really ... boring, depressing and frustrating.

We decided, actually she decided, to look for something she could like. She started to look around, and she found a job she likes in less than a week. Right now she goes to work every day at 6 AM, which for me is crazy, but for her it is fine because she loves it. Winter or summer, no difference; this is important, really important.

SO the advice is: if you come here with your wife (or husband), help them find a job; spending the day at home waiting for you is not only stupid, it is dangerous.

Another point: people don't go out when it is cold. If you want to have friends, you must call them and see them in houses or pubs; if you do the organizing, even better.

This is a difference from many European countries, where people gather in the street, then move somewhere despite the weather, and sometimes just jump into your house uninvited.

## 10. After one year, what is the balance?

It is not possible to draw a balance after only a year. What I can say is: so far so good. I am not yet fully able to understand the country or the people, but I feel more at home here than in many other places I have lived in the past. It was fun for me to see, during the Olympic Games, my family cheering for Canada as they did for Italy; a very small signal, but really meaningful of how the Canadian spirit can take you.


Last updated: Friday, 17 May 2013, 00:52
