How to Make a Database Backup: MySQL Backup

If you store data in a MySQL database and losing it would be a problem, you need to make backups regularly. This article shows how to quickly create backups from the console and restore them to a MySQL database. You can also use the same dumps to transfer data to another server.

How to make a backup via the command line (using mysqldump)

If you have access to the server via the console (SSH), you can create database dumps and restore them much faster than through phpMyAdmin, Adminer, and similar tools. A dump is a text version of the database: a file of SQL commands that recreate a copy of your database. Here is the general form of the dump command:

$ mysqldump --opt -u [username] -p[password] [database] > [backup_file.sql]

- [username] - the database username (possibly root)
- [password] - the password for that user; you can write it directly after -p if it contains no special characters or spaces, for example -proot, -ppassword
- [database] - the name of your database
- [backup_file.sql] - the name of the file where the backup will be saved
- [--opt] - options for the mysqldump command; you can omit them

Let's say you have a database named DrupalDB, the username is root, the password is password, and the file name is backup.sql. Then the command looks like this:

$ mysqldump -u root -ppassword DrupalDB > backup.sql

If you prefer not to put the password on the command line, or the password contains spaces or special characters (#!, -_), enter the password interactively instead; the command then looks like this:

$ mysqldump -u root -p DrupalDB > backup.sql

You can also back up individual tables; to do this, list the tables separated by spaces, for example nodes and users:

$ mysqldump -u root -p DrupalDB nodes users > backup.sql

In addition to individual tables, you can back up several databases at once with the --databases parameter, listing the required databases separated by spaces:

$ mysqldump -u root -p --databases DrupalDB Drupal7 Drupal8 > backup.sql

If you want to migrate the entire MySQL server, you can copy all data using the --all-databases option:

$ mysqldump -u root -p --all-databases > alldb_backup.sql

The mysqldump command also has several useful options:

--add-drop-table - adds a DROP TABLE statement before each CREATE TABLE in the dump, so existing tables are deleted before the backup is restored.

--no-data - copies only the database structure, without data. This is useful, for example, for cache tables, which in Drupal can hold hundreds of thousands of records we don't need.

--add-lock - surrounds each table's data in the dump with LOCK TABLES and UNLOCK TABLES. Note that on a large database the tables can stay locked for a long time.

How to backup a MySQL database with compression

This is perhaps the most practical option, because a dump typically compresses 10-20 times, so backups of even large databases become quite small. For compression we will use the gzip command:

$ mysqldump -u root -p DrupalDB | gzip -9 > backup.sql.gz

If you want to unzip a file (not restore the database, but simply unzip), then use this command:

$ gunzip backup.sql.gz
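As a quick sanity check, the compress/decompress cycle can be verified on a throwaway file; backup.sql below is a stand-in created by the sketch itself, not a real dump:

```shell
# Create a stand-in for a real dump file
echo "CREATE TABLE nodes (id INT);" > backup.sql

# Compress at maximum level, keeping the original for comparison
gzip -9 -c backup.sql > backup.sql.gz

# Verify archive integrity without unpacking it
gzip -t backup.sql.gz && echo "archive OK"

# Decompress to a new name and confirm nothing was lost
gunzip -c backup.sql.gz > restored.sql
cmp backup.sql restored.sql && echo "contents identical"
```

The same `gzip -t` check is cheap insurance before you rely on any compressed backup.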

How to restore a MySQL database from a backup

To restore a database from a dump you need an empty database; you can drop the existing tables with Adminer or phpMyAdmin first. If the dump was created with --add-drop-table, the tables are dropped and recreated automatically, so there is nothing to clean up. Here is the command to restore the database from a dump:

$ mysql -u root -p DrupalDB < backup.sql

If you had a compressed backup, then use this command:

$ gunzip < backup.sql.gz | mysql -u root -p DrupalDB

Backup and restore via Adminer (a replacement for phpMyAdmin)

Adminer is a replacement for phpMyAdmin with the same functionality, packaged as a single small file, which is very convenient.

We simply copy the file to the root of our site and access it through the browser:

http://yoursite_name/adminer-4.2.1.php (you can rename it to adminer.php for convenience):

Now click Export (Dump) and download the data. Here too we can skip the cache tables whose data we do not need:

To load the dump back, use the Import tab:

Next, I would like to show how I set up automatic backups of my MySQL databases on Linux. This is very useful information, especially if you have never made a backup before.

1. Install and configure MySQL. I assume everything is already configured and ready to work and that there are databases to back up; my previous topics cover the installation in case anyone needs it:

There are many more topics, just search the site if necessary.

2. We create the actual backup of all databases.

First, let's create a folder where all our backups will be stored:

# mkdir /home/captain/backup && cd /home/captain/backup

Create a backup using the command:

# mysqldump -u root -proot -h 127.0.0.1 --all-databases | gzip > /home/captain/backup/my_database_backup_`date +%m%d%y`.sql.gz

PS: note the character used around the date command: it is a backtick (`), which the shell uses for command substitution, not a regular single quote (').

Let's make sure the file has been created and everything is fine:

# ls -lah

A backup made by hand once is not enough; you should not have to ask yourself later, "did I make a backup?" Add the command to crontab so the MySQL data is backed up automatically. This is done like this:

# vim /etc/crontab

This option works, but there is a problem: the job is executed as the root user, which is best avoided. The better way is to use the user's own crontab:

# crontab -e

# Automatic backup of MySQL databases on Linux
00 00 * * * /usr/bin/mysqldump -u root -proot -h 127.0.0.1 --all-databases | gzip > /home/captain/backup/my_database_backup_`date +%m%d%y`.sql.gz
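The backticks around date are expanded by the shell on every run, so each execution produces a new file name. A minimal sketch of that substitution (the directory and prefix are stand-ins, and a gzipped echo stands in for the real mysqldump pipeline):

```shell
# Build a dated backup file name the same way the cron line does
BACKUP_DIR="/tmp/backup-demo"
mkdir -p "$BACKUP_DIR"
FILE="$BACKUP_DIR/my_database_backup_`date +%m%d%y`.sql.gz"

# Stand-in for the real "mysqldump | gzip" pipeline
echo "dummy dump" | gzip > "$FILE"

ls "$BACKUP_DIR"
```

Because the date is part of the name, yesterday's backup is never overwritten by today's run.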

With this, I complete the topic of automatic backup of MySQL databases in Linux.

The following example shows how to perform a MySQL database backup using a GUI.

1. Right-click in the Database Explorer window and select Backup > Create Database Backup. The Database Backup Wizard window will open.


2. On the General tab, specify the connection, the database to create the backup, and the name of the resulting file. Click Next.



3. Select the content to back up. Click Next.



4. Specify detailed options for creating a MySQL backup. Click Next.



5. Configure the Error handling and Log file options. Click Execute to create a MySQL backup.



6. Click Execute.

How to schedule a MySQL backup?

Sometimes you may need to set up automatic creation of MySQL database backups. To do this, first create a project file containing the backup settings using the MySQL Backup Wizard. Once you have a project file, schedule a Windows task that runs at a specific time or when a specific condition is met.



1. Select all the necessary backup options in the MySQL Backup Wizard and save the project.

2. Run Task Scheduler on Windows.

3. Select Action > Create Basic Task in the top menu.

4. Provide a name for the new task. Click Next.

5. Select one of the options to run the MySQL backup task. Click Next.

6. Select times and frequencies to run the task. Click Next.

7. Select Start a program and click Next.

8. In the Program or script field, specify the path to the dbforgemysql.com file. (The default path is "C:\Program Files\Devart\dbForge Studio for MySQL\dbforgemysql.com".)



Also, don't forget to specify the following arguments in the Add arguments field:

/backup /connection:"User Id=root;Host=db;Port=3309;Database=sakila;Character Set=utf8;" /projectfile:"C:\Users\Documents\dbForge Studio for MySQL\Export\sakila.backup"

9. Check all the settings again and, if no changes are required, click Finish. Otherwise, click Back and make the necessary changes.



Our MySQL Backup Wizard will help you avoid data loss. In just a few simple steps, you can create copies of entire databases or just selected objects, and restore them in the future just as quickly and easily.

Hello readers and subscribers of the site! Today I decided to talk about the most important thing, about a site backup, thanks to which all site data can be restored at any time of the day or night in the same form as before.

It would seem, what is so important about this topic? It's just a copy of the site; after all, we pay for hosting, and the provider is supposed to be responsible for the safety of the sites it serves.

There is some truth in this, but no one is protected from force majeure. Imagine your surprise when one day you open your website and see an error message instead.

Do you know what the worst thing is? That you haven't backed it up (neither the files nor the database).

It is hard to imagine all your work and steady income ceasing to exist over such a simple oversight.

A simple example: you have been running a blog for several years, spending time every day writing articles and promoting it. An army of regular readers and subscribers has gathered around you, plus your site has already begun to make a profit, which allowed you to quit your job and become financially independent through the online network.

But then the server holding the hard drive with your site stops working. You see, hosting is not just abstract virtual storage: somewhere in the real world there is a special room full of modern computers and thousands of gigabytes of storage.

You are paying for a piece of space on a remote hard drive. Now imagine that there was a fire in this room and all the data on the hard drive where your site was located was damaged.

The hosting provider will send you an apology and nothing more, and you will have to start your career as an online entrepreneur from scratch.

Those who were foresighted enough to keep a backup copy of the site simply restored the data on another hosting and forgot about the incident.

I personally experienced this fear a few days ago, when my hosting provider's servers crashed and my blog was unavailable to visitors for some time: I realized I could lose everything in an instant.

Of course, I'm not a complete newbie, and I know very well that all data needs to be backed up against such incidents, but at that moment I hadn't updated my blog backup for several months.

To be honest, it would be painful to go back and rewrite dozens of articles from scratch. So think about it; below I will show the ways a backup copy of site data can be created.

Reasons why a site may disappear:

  • Error when editing site files. You can easily mess up the code or make a mistake.
  • Hosting failure. There can be many reasons for this, from natural disasters to hosting companies that simply cheat their clients. Anything can happen!
  • Hacker or virus attack on the site. Now there are a lot of attackers whose targets are ordinary webmasters and their profitable projects.

Backup of website files on hosting

Let's start with the fact that almost every hosting provider creates daily backups of all customer data.

Now the most interesting part: next you set the interval at which the backup plugin will automatically copy the database and send the backup to your email.

I set the frequency to once a day and created a separate email account for this, so as not to clutter up my work address with archived copies of the blog database. As you can see, there is nothing complicated!

This is where I will end the post. I hope it conveys the importance of periodically creating a copy of all site files and the MySQL database.

It's better to spend a couple of minutes now than several months restoring a project from web archives. Bye!

The MySQL database management system and its fork MariaDB are very popular, largely thanks to being open source and free. A huge number of projects, on the Internet and elsewhere, use MySQL/MariaDB as their database server.

Regular backups are a reliable means of protection against damage or loss of data. There are quite a few software tools for organizing MySQL backups; let's look at one of the simplest methods.

The wonderful mysqldump utility can back up MySQL/MariaDB databases on the fly, without stopping the server. With its help we will make a copy of every database, and we will delete archives older than 10 days.

Sample script code:

#!/bin/bash

DB_BACKUP="/backups/backup-mysql/`date +%Y-%m-%d`"
DB_USER="root"
DB_PASSWD="mysql-root-password"
HN=`hostname | awk -F. '{print $1}'`

# Create a directory to save the backup
mkdir -p $DB_BACKUP

# Delete archives older than 10 days
find /backups/backup-mysql/ -maxdepth 1 -type d -mtime +10 -exec rm -rf {} \;

# Make a copy of each mysql/mariadb database on the server
for db in $(mysql --user=$DB_USER --password=$DB_PASSWD -e "show databases" -s --skip-column-names | grep -viE "(staging|performance_schema|information_schema)"); do
    mysqldump --user=$DB_USER --password=$DB_PASSWD --events --opt --single-transaction $db | bzip2 > "$DB_BACKUP/mysqldump-$HN-$db-$(date +%Y-%m-%d).sql.bz2"
done

Schedule the script in cron so it runs daily (here at 01:55):

55 1 * * * /usr/local/bin/backup-mysql.sh
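The retention step of the script (find ... -mtime +10) can be tried safely on throwaway files; the directory below is a stand-in for the real backup location, and the rule is applied to files rather than directories for the demonstration:

```shell
# Stand-in backup directory with one old and one fresh archive
DEMO=/tmp/retention-demo
mkdir -p "$DEMO"
touch -d "15 days ago" "$DEMO/old-backup.sql.bz2"
touch "$DEMO/fresh-backup.sql.bz2"

# Same retention rule as the script: remove anything older than 10 days
find "$DEMO" -maxdepth 1 -type f -mtime +10 -exec rm -f {} \;

ls "$DEMO"
```

After the find command, only fresh-backup.sql.bz2 remains; -mtime +10 matches entries whose modification time is more than 10 full days in the past.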

The script could look much simpler if it used the --all-databases parameter instead of iterating over the databases directly. However, per-database dumps make it easier to work with copies of large databases.

And lastly, I almost forgot: backups should not be stored on the same server they are made from.

This script assumes that the /backups directory is a mounted, trusted remote filesystem.
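If a mounted remote filesystem is not available, the finished archives can be copied off-host after the dump completes. A minimal sketch of that step, using local stand-in directories (in practice the destination would be a remote host, reached with a tool such as rsync over SSH):

```shell
# Stand-ins for the local backup directory and the remote mount point
SRC=/tmp/backups-local
DST=/tmp/backups-remote
mkdir -p "$SRC" "$DST"
echo "dump" | gzip > "$SRC/db-backup.sql.gz"

# Copy the archives to the "remote" location, preserving timestamps
cp -a "$SRC/." "$DST/"

ls "$DST"
```

Whatever transport you choose, the principle is the same: a backup that lives only on the server it protects is not a backup.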


