
Preparing Your MySQL Database For Migration With Database Migration Service

Recently, we announced the new Database Migration Service (DMS) to make it easier to migrate databases to Google Cloud. DMS is an easy-to-use, serverless migration tool that provides minimal downtime database migration to Cloud SQL for MySQL (Preview) and Cloud SQL for PostgreSQL (available in Preview by request). 

In this post, we’ll cover the steps you need to take to prepare your MySQL database for migration with DMS.

What types of migrations are supported?

When we talk about migrations, we usually mean either an offline migration or a minimal-downtime migration using continuous data replication. Database Migration Service (DMS) for MySQL supports both: you can choose a one-time migration or a continuous migration.

Version support

DMS for MySQL supports source database versions 5.5, 5.6, 5.7, and 8.0, and it supports migrating to the same version or one major version higher.

Here are the possible migration paths for each version:

Following that rule: 5.5 can migrate to 5.6; 5.6 to 5.6 or 5.7; 5.7 to 5.7 or 8.0; and 8.0 to 8.0.

When migrating to a different version than your source database, your source and destination databases may have different values for the sql_mode flag. The SQL mode defines what SQL syntax MySQL supports and what types of data validation checks it performs. For instance, the default SQL mode values are different between MySQL 5.6 and 5.7. 

As a result, with the default SQL modes in place, a date like 0000-00-00 would be valid in version 5.6 but not in version 5.7. The default SQL modes also change the behavior of GROUP BY between version 5.6 and version 5.7. Check that the values of the sql_mode flag are set appropriately on your destination database.

You can learn more about the sql_mode flag and what the different values mean in the MySQL documentation.
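To compare SQL modes between your source and destination, you can query the flag on each instance with the mysql client. The SET statement below is only an illustration; choose modes that match your source:

```sql
-- Show the SQL modes currently in effect on this instance
SELECT @@GLOBAL.sql_mode;

-- Illustrative only: relax the destination to accept 5.6-era data
SET GLOBAL sql_mode = 'NO_ENGINE_SUBSTITUTION';
```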


Before you can proceed with the migration, there are a few prerequisites you need to complete. We have a quickstart that shows all the steps for migrating your database, but what we want to focus on in this post is what you need to do to configure your source database, and we’ll also briefly describe setting up a connection profile and configuring connectivity.

Configure your source database

There are several steps you need to take to configure your source database. Please note that depending on your current configuration, a restart on your source database may be necessary to apply the required configurations.

Stop DDL write operations

Before you begin to migrate data from the source database to the destination database, you must stop all Data Definition Language (DDL) write operations, if any are running on the source. You can use this script to verify whether any DDL operations were executed in the past 24 hours, or whether any are currently in progress.

server_id system variable

One of the most important items to set up in your source database instance is the server_id system variable. If you are not sure what your current value is, you can check by running this on your mysql client:

SELECT @@GLOBAL.server_id;

The value displayed must be greater than or equal to 1. If you are not sure how to configure server_id, you can look at this page. Although this value can be changed dynamically, replication does not automatically start when you change the variable unless you restart your server.
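For example, you can set the value at runtime and also persist it in the option file so it survives a restart (the value 1234 here is just an illustration):

```sql
-- Set server_id for the running instance (takes effect immediately)
SET GLOBAL server_id = 1234;

-- To persist across restarts, also add to my.cnf under [mysqld]:
--   server_id = 1234
```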

Global transaction ID (GTID) logging

The gtid_mode flag controls whether or not global transaction ID logging is enabled and what types of transactions the logs can contain. Make sure that gtid_mode is set to ON or OFF, as ON_PERMISSIVE and OFF_PERMISSIVE are not supported with DMS. 

To find out which gtid_mode your source database uses, run the following command:

SELECT @@GLOBAL.gtid_mode;

If the value for gtid_mode is set to ON_PERMISSIVE or OFF_PERMISSIVE, note that the value can only be changed one step at a time. For example, if gtid_mode is set to ON_PERMISSIVE, you can change it to ON or to OFF_PERMISSIVE, but not directly to OFF.

Although the gtid_mode value can be changed dynamically without a server reboot, make sure you change it with SET GLOBAL and persist it in your configuration file; otherwise the change is lost when the server restarts and won’t be in effect when you start the migration via DMS. You can learn more about gtid_mode in the MySQL documentation.
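As a sketch, moving from OFF to ON (assuming your workload contains no GTID-incompatible statements) looks like this, one step per statement:

```sql
-- Required before enabling GTIDs
SET GLOBAL enforce_gtid_consistency = ON;

-- Step through the modes one at a time
SET GLOBAL gtid_mode = OFF_PERMISSIVE;
SET GLOBAL gtid_mode = ON_PERMISSIVE;

-- Wait until all anonymous transactions have replicated everywhere,
-- then complete the switch
SET GLOBAL gtid_mode = ON;
```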

Database user account

The user account that you are using to connect to the source database needs to have these global privileges:


We recommend that you create a dedicated user for the migration, and you can temporarily leave that user’s host set to %. More information on creating a user can be found here.

The password of the user account used to connect to the source database must not exceed 32 characters in length. This limitation is specific to MySQL replication; for more information, see MySQL Bug #43439.
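The exact privilege list is in the DMS documentation; as an illustration, creating a dedicated migration user might look like the following. The user name dms_user and the privileges shown are assumptions, not the authoritative list:

```sql
-- Hypothetical example: dedicated migration user with typical
-- replication-related privileges (verify the exact list in the DMS docs)
CREATE USER 'dms_user'@'%' IDENTIFIED BY 'a-password-under-32-chars';
GRANT REPLICATION SLAVE, REPLICATION CLIENT, SELECT, SHOW VIEW
  ON *.* TO 'dms_user'@'%';
FLUSH PRIVILEGES;
```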

DEFINER clause

Because a MySQL migration job doesn’t migrate user data, sources that contain metadata defined by users with the DEFINER clause will fail when invoked on the new Cloud SQL replica, as the users don’t yet exist there.

You can identify which DEFINER values exist in your metadata by using these queries. Check for entries for either root@localhost or users that don’t exist on the target instance.
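As an illustration, DEFINER values for views, stored routines, triggers, and events can be listed from information_schema:

```sql
-- List DEFINER values across the object types that carry one
SELECT DEFINER, table_schema, table_name FROM information_schema.views;
SELECT DEFINER, routine_schema, routine_name FROM information_schema.routines;
SELECT DEFINER, trigger_schema, trigger_name FROM information_schema.triggers;
SELECT DEFINER, event_schema, event_name FROM information_schema.events;
```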





If your source database does contain this metadata, you can do one of the following:

  • Update the DEFINER clause to INVOKER on your source MySQL instance prior to setting up your migration job.
  • Create the users on your target Cloud SQL instance before starting your migration job.
    1. Create a migration job without starting it. That is, choose Create instead of Create & Start.
    2. Create the users from your source MySQL instance on your target Cloud SQL instance using the Cloud SQL API or UI.
    3. Start the migration job from the migration job list or the specific job’s page.

Binary logging

Enable binary logging on your source database, and set retention to a minimum of 2 days. We recommend setting it to 7 days to minimize the likelihood of lost log position. You can learn more about binary logging in the MySQL documentation.
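You can check the current state with the mysql client. On MySQL 8.0 the retention period is controlled by binlog_expire_logs_seconds; on 5.6 and 5.7, use expire_logs_days. The values below illustrate a 7-day retention:

```sql
-- Check whether binary logging is enabled (1 = on).
-- Enabling it requires log_bin in the server configuration and a restart.
SELECT @@GLOBAL.log_bin;

-- MySQL 8.0: keep binary logs for 7 days (604800 seconds)
SET GLOBAL binlog_expire_logs_seconds = 604800;

-- MySQL 5.6 / 5.7 equivalent:
-- SET GLOBAL expire_logs_days = 7;
```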


Storage engine

All tables, except tables in system databases, must use the InnoDB storage engine. If you need more information about converting to InnoDB, you can reference this documentation on converting tables from MyISAM to InnoDB.
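To find tables that still need converting, you can query information_schema, skipping the system schemas:

```sql
-- List non-InnoDB tables outside the system databases
SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE engine <> 'InnoDB'
  AND table_type = 'BASE TABLE'
  AND table_schema NOT IN
    ('mysql', 'information_schema', 'performance_schema', 'sys');
```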

Set up a connection profile

A connection profile represents all the information you need to connect to a data source. You can create a connection profile on its own or in the context of creating a specific migration job. Creating a source connection profile on its own is useful if the person who has the source access information is not the same person who creates the migration job. You can also reuse a source connection profile definition in multiple migration jobs.

Learn more about connection profiles and how to set them up in the documentation.

Configure connectivity

DMS offers several different ways that you can set up connectivity between the destination Cloud SQL database and your source database. 

There are four connectivity methods you can choose from:

The connectivity method you choose will depend on the type of source database, and whether it resides on-premises, in Google Cloud, or in another cloud provider. For a more in-depth look at connectivity, you can read this blog post.

Extra Resources

Now that you’ve learned how to prepare your MySQL database for migration, you can visit the DMS documentation to get started, or continue learning by reading these blog posts:

Try out DMS in the Google Cloud console. It’s available at no additional charge for native lift-and-shift migrations to Cloud SQL.

By Gabriela Ferrara. Source: Google Cloud Blog.

