

Planet MySQL
Planet MySQL - https://planet.mysql.com

  • How to Install MySQL Enterprise Edition on Docker and Monitor it with MySQL Enterprise Monitor?
    Introduction: Before talking about the installation of MySQL inside Docker, it is important to understand what Docker is. Docker is a tool designed to create, deploy, and run an application anywhere. It allows us to package up an application with all of its requirements, such as libraries and other dependencies, and ship it all as a single package.
Who uses Docker?
- Developers: Docker enables developers to build applications without spending much time on IT infrastructure.
- Sysadmins: Docker enables sysadmins to streamline software delivery, such as developing and deploying bug fixes and new features without roadblocks.
- Enterprises: Docker works in the cloud and on premises, and supports both traditional and microservices deployments.
Why Docker?
- It easily adapts to your working environment.
- It is simple to use.
- It eliminates friction in the development life cycle.
More info at: https://docs.docker.com/engine/docker-overview/
Let's install MySQL Enterprise Edition 8.0.19 with Docker.
Warning: The MySQL Docker images maintained by the MySQL team are built specifically for Linux platforms. Other platforms are not supported, and users running these MySQL Docker images on them do so at their own risk. See the discussion here for some known limitations of running these containers on non-Linux operating systems.
More about MySQL Enterprise Edition: https://www.mysql.com/products/enterprise/
Step 1: Download the MySQL EE binaries from the MOS portal under Patches & Updates.
Step 2: Download the binaries by clicking on the product name, and unzip the file to obtain the tarball inside (see step 1).
Step 3: Load the image by running the command below:
# docker load -i mysql-enterprise-server-8.0.19.tar
If you downloaded the mysql-enterprise-server-8.0.19.tar file onto a Windows laptop, push it to the Linux machine where Docker is running, then go to that directory and load the tar file.
Step 4: Verify the image was loaded:
# docker images
Step 5: Start a MySQL Server instance:
# docker run --name MySQLEnterpriseContainer -d -p 3306:3306 mysql/enterprise-server:8.0
The --name option, for supplying a custom name for your server container, is optional; if no container name is supplied, a random one is generated. In mysql/enterprise-server:8.0, the format is image_name:tag.
Note: Don't give the full version 8.0.19; the tag has to be 8.0, or the error below will be generated.
Step 6: Verify the container is running:
# docker ps
Step 7: Get the generated random password:
# docker logs MySQLEnterpriseContainer 2>&1 | grep GENERATED
Step 8: Connect to the MySQL server from within the container:
# docker exec -it MySQLEnterpriseContainer mysql -uroot -p
Enter the generated password: HOpnuMIxibVYMijv3syRYK4KjEc
Step 9: Change the temporary password:
ALTER USER 'root'@'localhost' IDENTIFIED BY 'MySQL8.0';
Step 10: Create a table and insert records to test:
Create database test;
Create table test.sales(Empname Varchar(20), CountryName Varchar(20));
Insert into test.sales select 'Ram','Delhi';
Insert into test.sales select 'Radha','Delhi';
Insert into test.sales select 'Rakesh','Mumbai';
Insert into test.sales select 'Rajesh','Mumbai';
Extra commands to play with: to get into the Docker container to find the data directory and base directory, execute Linux bash commands, etc.
shell> docker exec -it MySQLEnterpriseContainer bash
bash-4.2# ls /var/lib/mysql
To stop, start, or restart the MySQL Server container we have created, use these commands:
docker stop MySQLEnterpriseContainer
docker start MySQLEnterpriseContainer
docker restart MySQLEnterpriseContainer
To delete the MySQL container, stop it first, and then use the docker rm command:
docker stop MySQLEnterpriseContainer
docker rm MySQLEnterpriseContainer
Accessing the Docker MySQL database from a physical host through the MySQL Workbench tool. Assume MySQL Workbench is installed on a Windows machine and Docker MySQL is installed on the Docker host.
Step 1: Make sure MySQL can accept connections from remote hosts:
update mysql.user set host='%' where user='root';
flush privileges;
Step 2: Make sure the Docker MySQL port is published outside the host (see -p 3306:3306 in step 5).
Step 3: Connect MySQL Workbench to the Docker MySQL database.
Monitoring the Docker MySQL database from MySQL Enterprise Monitor. Assume MySQL Enterprise Monitor is installed on a Windows machine, already monitoring many on-premises databases and cloud instances, and we now also want to monitor the MySQL database running in Docker. Log in to MySQL Enterprise Monitor --> Add Instances. It can also monitor SQL statements.
Know more about MySQL Enterprise Monitor: https://dev.mysql.com/doc/mysql-monitor/8.0/en/mem-getting-started.html
Conclusion: Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. Running MySQL in Docker is quite easy, and whenever the environment demands running multiple MySQL instances on a single server, Docker is a good fit.

  • InnoDB Flushing in Action for Percona Server for MySQL
    As the second part of the earlier post Give Love to Your SSDs – Reduce innodb_io_capacity_max!, we wanted to put together some concepts on how InnoDB flushing works in recent Percona Server for MySQL versions (8.0.x prior to 8.0.19, or 5.7.x). It is important to understand this aspect of InnoDB in order to tune it correctly. This post is a bit long and complex, as it goes very deep into some InnoDB internals. InnoDB internally handles flush operations in the background to remove dirty pages from the buffer pool. A dirty page is a page that is modified in memory but not yet flushed to disk. This is done to lower the write load and the latency of the transactions. Let's explore the various sources of flushing inside InnoDB.
Idle Flushing
We already discussed idle flushing in the previous post mentioned above. When there are no write operations, which means the LSN isn't moving, InnoDB flushes dirty pages at the innodb_io_capacity rate.
Dirty Pages Percentage Flushing
This source of flushing is a slightly modified version of the old InnoDB flushing algorithm used years ago. If you have been around MySQL for a while, you probably don't like this algorithm. The algorithm is controlled by these variables:
- innodb_io_capacity (default 200)
- innodb_max_dirty_pages_pct (default 75)
- innodb_max_dirty_pages_pct_lwm (default 0 in 5.7, and 10 as of 8.0.3)
If the ratio of dirty pages over the total number of pages in the buffer pool is higher than the low water mark (lwm), InnoDB flushes pages at a rate proportional to the ratio of the actual dirty page percentage over innodb_max_dirty_pages_pct, multiplied by innodb_io_capacity. If the actual dirty page percentage is higher than innodb_max_dirty_pages_pct, the flushing rate is capped at innodb_io_capacity. The main issue with this algorithm is that it is not looking at the right thing.
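The percentage-based rate described above can be sketched as follows. This is an illustrative Python model of the heuristic, not the actual InnoDB code; the function and parameter names are ours:

```python
def dirty_pct_flush_rate(dirty_pct, max_dirty_pct=75.0, lwm=10.0, io_capacity=200):
    """Approximate pages/s flushed by the dirty-pages-percentage
    heuristic: nothing below the low water mark, proportional growth
    between lwm and max, capped at innodb_io_capacity above max."""
    if dirty_pct < lwm:
        return 0                       # below the low water mark: no flushing
    if dirty_pct >= max_dirty_pct:
        return io_capacity             # capped at innodb_io_capacity
    # proportional to the dirty-page percentage over the maximum
    return int(io_capacity * dirty_pct / max_dirty_pct)

print(dirty_pct_flush_rate(5))     # -> 0
print(dirty_pct_flush_rate(37.5))  # -> 100
print(dirty_pct_flush_rate(90))    # -> 200
```

Notice how the rate depends only on buffer pool state, never on how fast the redo log is filling, which is exactly the "not looking at the right thing" problem.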
As a result, transaction processing may often freeze in a flush storm because the max checkpoint age is reached. Here's an example from a post written by Vadim Tkachenko back in 2011, InnoDB Flushing: a lot of memory and slow disk [figure: TPCC New Order transactions over time]. We can see a sharp drop in NOTP (New Order Transactions per second) at time = 265, and that's because InnoDB reached the maximum checkpoint age and had to furiously flush pages. This is often called a flush storm. These storms freeze the write operations and are extremely bad for database operation. For more details about flush storms, please see InnoDB Flushing: Theory and solutions.
Free List Flushing
To speed up reads and page creation, InnoDB tries to always have a certain number of free pages in each buffer pool instance. Without some free pages, InnoDB could need to flush a page to disk before it can load a new one. This behavior is controlled by another poorly understood variable: innodb_lru_scan_depth. This is a pretty bad name for what the variable controls. Although the name makes sense if you look at the code, for a normal DBA the name should be innodb_free_pages_per_pool_target. At regular intervals, the oldest pages in the LRU list of each buffer pool instance are scanned (hence the name), and pages are freed up to the variable value. If one of these pages is dirty, it will be flushed before it is freed.
Adaptive Flushing
The adaptive flushing algorithm was a major improvement to InnoDB, and it allowed MySQL to handle a much heavier write load in a decent manner. Instead of looking at the number of dirty pages like the old algorithm, adaptive flushing looks at what matters: the checkpoint age. The first adaptive algorithm that we are aware of came from Yasufumi Kinoshita in 2008 while he was working at Percona. The InnoDB plugin 1.0.4 integrated similar concepts, and eventually Percona removed its flushing algorithm because the upstream one was doing a good job.
The following description is valid for Percona Server for MySQL 8.0.18 and earlier. While we were busy writing this post, Oracle released 8.0.19, which introduces significant changes to the adaptive flushing code. It looks like a good opportunity for a follow-up post in the near future…
Some Background
But let's first take a small step back to put together some concepts. InnoDB stores rows in pages of normally 16KB. These pages are either on disk, in the data files, or in memory in the InnoDB buffer pool. InnoDB only modifies pages in the buffer pool. Pages in the buffer pool may be modified by queries, and then they become dirty. At commit, the page modifications are written to the redo log, the InnoDB log files. After the write, the LSN (log sequence number) is increased. The dirty pages are not flushed back to disk immediately and are kept dirty for some time. Delayed page flushing is a common performance hack. The way InnoDB flushes its dirty pages is the focus of this post. Let's now consider the structure of the InnoDB redo log. The InnoDB log files form a ring buffer containing unflushed modifications [figure: the InnoDB redo log files form a ring buffer]. The Head points to where InnoDB is currently writing transactional data. The Tail points to the oldest unflushed data modification. The distance between the Head and the Tail is the checkpoint age, expressed in bytes. The size and the number of log files determine the max checkpoint age, which is approximately 80% of the combined size of the log files. Write transactions move the Head forward while page flushing moves the Tail. If the Head moves too fast and there is less than 12.5% of the log space available before the Tail, transactions can no longer commit until some space is freed in the log files. InnoDB reacts by flushing at a high rate, an event called a flush storm. Needless to say, flush storms should be avoided.
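To make the ring-buffer numbers concrete, here is a quick Python calculation using the approximate figures above (80% for the max checkpoint age, 12.5% remaining space for the commit stall). The function name and the 512 MiB example size are ours:

```python
def log_thresholds(log_file_size, log_files_in_group=2):
    """Derive approximate checkpoint-age thresholds (in bytes) from
    the redo log configuration, per the figures given in the text."""
    total = log_file_size * log_files_in_group
    max_checkpoint_age = int(total * 0.80)   # ~80% of the combined log size
    # commits stall once less than 12.5% of the log space remains free
    sync_point = int(total * (1 - 0.125))
    return total, max_checkpoint_age, sync_point

# Example: two 512 MiB redo log files
total, max_age, sync = log_thresholds(512 * 1024**2)
print(total, max_age, sync)
```

With a 1 GiB combined log, the checkpoint age should stay under roughly 860 MB, and commits begin to stall near 940 MB, which is why undersized redo logs make flush storms so much more likely.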
How the Adaptive Flushing Algorithm Works
The adaptive flushing algorithm is controlled by the following variables:
- innodb_adaptive_flushing (default ON)
- innodb_adaptive_flushing_lwm (default 10)
- innodb_flush_sync (default ON)
- innodb_flushing_avg_loops (default 30)
- innodb_io_capacity (default 200)
- innodb_io_capacity_max (default at least 2000)
- innodb_cleaner_lsn_age_factor (Percona Server only, default high_checkpoint)
The goal of the algorithm is to adapt the flushing rate (the speed of the Tail) to the evolution of the checkpoint age (the speed of the Head). It starts when the checkpoint age is above the adaptive flushing low water mark, by default 10% of the max checkpoint age. Percona Server for MySQL offers two algorithms: Legacy and High Checkpoint. With ageFactor being the checkpoint age expressed as a percentage of the max checkpoint age, the Legacy age factor is:
pctOfIoCapToUse = (ioCapMax / ioCap) * ageFactor^(3/2) / 7.5
Notice the power of 3/2 on the age factor and the 7.5 denominator. The High Checkpoint age factor is:
pctOfIoCapToUse = (ioCapMax / ioCap) * ageFactor^(5/2) / 700.5
This time the age power factor is 5/2 and the denominator is 700.5. It is important to note that in both equations, innodb_io_capacity (ioCap) appears as the denominator in a ratio with innodb_io_capacity_max (ioCapMax). If we plot both equations together for a ratio ioCapMax/ioCap of 10, as with the default values, the Percona High Checkpoint curve starts slowly but then increases rapidly. This allows for more dirty pages (see the post we previously discussed), and it is normally good for performance. The returned percentage value can be much higher than 100.
Average Over Time
So far, only the checkpoint age was used. While this is good, the goal of the algorithm is to flush pages in a way that the tail of the redo log ring buffer moves at about the same speed as the head.
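Reading the two age-factor equations off the description above (power 3/2 with denominator 7.5 for Legacy, power 5/2 with denominator 700.5 for High Checkpoint, both scaled by ioCapMax/ioCap), we can compare them numerically. This is a sketch of the formulas with our own function names, not the server code itself:

```python
def legacy_pct(age_pct, io_cap=200, io_cap_max=2000):
    """Legacy age factor: (ioCapMax/ioCap) * agePct^(3/2) / 7.5."""
    return (io_cap_max / io_cap) * age_pct ** 1.5 / 7.5

def high_checkpoint_pct(age_pct, io_cap=200, io_cap_max=2000):
    """Percona High Checkpoint: (ioCapMax/ioCap) * agePct^(5/2) / 700.5."""
    return (io_cap_max / io_cap) * age_pct ** 2.5 / 700.5

# checkpoint age as a percentage of the max checkpoint age
for age in (20, 50, 80, 100):
    print(age, round(legacy_pct(age)), round(high_checkpoint_pct(age)))
```

At 20% checkpoint age the High Checkpoint value is far below the Legacy one (more dirty pages allowed), but by 100% it has overtaken it; both return well over 100%, confirming that the result is a percentage that can greatly exceed 100.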
Approximately every innodb_flushing_avg_loops seconds, the rate of pages flushed and the progression of the head of the redo log are measured, and the new values are averaged with the previous ones. The goal here is to give some inertia to the algorithm, to damp the changes. A higher value of innodb_flushing_avg_loops makes the algorithm slower to react, while a smaller value makes it more reactive. Let's call these quantities avgPagesFlushed and avgLsnRate.
Pages to Flush for the avgLsnRate
Based on the avgLsnRate value, InnoDB scans the oldest dirty pages in the buffer pool and calculates the number of pages that are at less than avgLsnRate of the tail. Since this is calculated every second, the number of pages returned is what needs to be flushed to maintain the correct rate. Let's call this number pagesForLsnRate.
Finally…
We now have all the parts we need. The actual number of pages that will be flushed is given by:
pagesToFlush = (pctOfIoCapToUse/100 * ioCap + avgPagesFlushed + pagesForLsnRate) / 3
This quantity is then capped at ioCapMax. As you can see, pctOfIoCapToUse is multiplied by ioCap. If you look back at the equations giving pctOfIoCapToUse, they have ioCap in the denominator. The ioCap cancels out, and the adaptive flushing algorithm is thus independent of innodb_io_capacity; only innodb_io_capacity_max matters. There could also be more pages flushed if innodb_flush_neighbors is set.
Can InnoDB Flush Pages at a Rate Higher Than innodb_io_capacity_max?
Yes. If innodb_flush_sync is ON, InnoDB is authorized to go beyond it when the max checkpoint age is reached or almost reached. If it is set to OFF, you'll never go beyond innodb_io_capacity_max. If the latency of your read queries is critical, disabling innodb_flush_sync will prevent an IO storm, but at the expense of stalling the writes.
InnoDB page_cleaner Error Message
Error messages like [Note] InnoDB: page_cleaner: 1000ms intended loop took 4013ms. The settings might not be optimal. (flushed=1438 and evicted=0, during the time.) are rather frequent.
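Putting the three quantities together numerically: based on our reading of the page-cleaner code, the three inputs are averaged and the result is capped at ioCapMax. This is a hedged sketch, and all names (and the example input values) are ours:

```python
def pages_to_flush(pct_of_io_cap, avg_pages_flushed, pages_for_lsn_rate,
                   io_cap=200, io_cap_max=2000):
    """Combine the three inputs. The io_cap in the product cancels the
    io_cap in pct_of_io_cap's denominator, so in practice only
    io_cap_max influences the result."""
    n = (pct_of_io_cap / 100 * io_cap + avg_pages_flushed + pages_for_lsn_rate) / 3
    return min(int(n), io_cap_max)   # capped at innodb_io_capacity_max

# hypothetical inputs: age factor at 100%, some flushing history
print(pages_to_flush(1333, 900, 1200))   # -> 1588
print(pages_to_flush(3000, 4000, 5000))  # -> 2000, hits the ioCapMax cap
```

Doubling io_cap while keeping io_cap_max fixed would halve pct_of_io_cap and leave the product unchanged, which is the cancellation the text describes.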
They basically mean the hardware wasn't able to flush innodb_io_capacity_max pages per second. In the above example, InnoDB tried to flush 1,438 pages, but the spinning disk was able to perform only about 360 per second. Thus, the flushing operation that was supposed to take 1 second ended up taking 4 seconds. If you really think the storage is able to deliver the number of write IOPS stated by innodb_io_capacity_max, then consider these possibilities:
- innodb_io_capacity_max represents a number of pages to be flushed; flushing a page may require more than one IO, especially when the number of tablespaces is large.
- A spike of read IOPS competed with the flushing.
- The device had a write latency spike. Garbage collection on SSDs can cause that, especially if the SSD is rather full.
- The doublewrite buffer had contention. Try Percona Server for MySQL with the parallel doublewrite buffer feature.
- Do you have enough page cleaners to fully utilize your hardware?
Tuning InnoDB
Now that we understand how InnoDB flushes dirty pages to disk, the next obvious step is to tune it. InnoDB tuning will be covered in a follow-up post, so stay tuned.
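The numbers in the sample message check out with a quick back-of-the-envelope calculation of the effective flush rate:

```python
# Figures from the sample page_cleaner message above
flushed_pages = 1438       # "flushed=1438"
actual_ms = 4013           # the intended 1000 ms loop took 4013 ms

# effective flush rate actually achieved, in pages per second
rate = flushed_pages / (actual_ms / 1000)
print(round(rate))         # -> 358, close to the ~360 pages/s the disk sustains
```

When you see this message repeatedly, dividing flushed by the reported loop time like this gives you a realistic upper bound for what innodb_io_capacity_max should be on that hardware.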

  • Automatic Schema Synchronization in NDB Cluster 8.0: Part 2
    In part 1, we took a brief, high-level look at the various protocols and mechanisms used to keep the Data Dictionary (DD) of MySQL servers connected to a MySQL Cluster in synchronization with each other and with the NDB Dictionary.… Tweet Share

  • Troubleshooting an OLAP system on InnoDB
    As part of Mydbops consulting, we received the below problem statement from one of our clients: "We have a high-powered server for reporting, which in turn powers our internal dashboard for viewing the logistics status. Even with high-end hardware, we had heavy CPU usage, which in turn triggered spikes in replication lag and slowness. Below is the hardware configuration."
OS : Debian 9 (Stretch)
CPU : 40
RAM : 220G (Usable)
Disk : 3T SSD with 80K sustained IOPS
MySQL : 5.6.43-84.3-log Percona Server (GPL)
Datasize : 2.2TB
Below is the graph of CPU utilisation from Grafana. Since the workload is purely reporting (OLAP), we could observe similar types of queries with different ranges. Below is the execution plan of the query. It is a join query over 6 tables.
Explain Plan:
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| 1 | SIMPLE | cf | ref | PRIMARY,name_x | name_x | 103 | const | 1 | Using where; Using index |
| 1 | SIMPLE | scf | ref | sip_idx,active_idx,flag_idx | flag_idx | 8 | logis.cf.flagId | 5820 | Using where |
| 1 | SIMPLE | sre | ref | staId,sre_idx1,shelfId_hubId,updateDate_statusId_statId_idx,statId_statusId_x | sre_idx1 | 18 | logis.scf.sipId | 1 | Using index condition; Using where |
| 1 | SIMPLE | scam | eq_ref | mappingId | mappingId | 8 | logis.scf.mapId | 1 | NULL |
| 1 | SIMPLE | ssdm | ref | sipIdIdx,SDetailIdIdx | shipmentIdIdx | 17 | logis.sre.sipId | 1 | NULL |
| 1 | SIMPLE | sd | eq_ref | PRIMARY,mrchIdIdx | PRIMARY | 8 | logis.ssdm.SDId | 1 | Using where |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
With the initial screening, it looks normal: it's a reporting (OLAP) query and it's bound to run longer, but this has started to eat into our system resources (CPU), and the replication lag causes stale or obsolete data in the internal dashboard. As the execution plan depicts, the query is using indexes, and the columns used are properly indexed. Only with table "scf" could we see a scan of 5820 rows; the index used there has low cardinality. Now we should tweak the optimizer to choose the right index. The optimizer chooses the index based on the stats collected for the table, stored under mysql.innodb_table_stats and mysql.innodb_index_stats. The default values of innodb_stats_persistent_sample_pages and innodb_stats_transient_sample_pages are 20 and 8 respectively, which is too low for our dataset. These defaults work well for smaller tables, but in our case the tables are a few hundred GB each. Since they are dynamic variables, we increased the values globally by roughly 10x:
mysql> set global innodb_stats_persistent_sample_pages=200;
Query OK, 0 rows affected (0.00 sec)
mysql> set global innodb_stats_transient_sample_pages=100;
Query OK, 0 rows affected (0.00 sec)
Below is the definition of these variables from the official documentation: "The number of index pages to sample when estimating cardinality and other statistics for an indexed column, such as those calculated by ANALYZE TABLE. Increasing the value improves the accuracy of index statistics, which can improve the query execution plan, at the expense of increased I/O during the execution of ANALYZE TABLE for an InnoDB table." Now we have to force index stats recalculation by running ANALYZE TABLE table_name on all the tables involved in the query; alternatively, you can make the variables persistent and restart the server to recalculate stats for all tables. We chose the first method since it is less disruptive.
Let us review the execution plan now. We can see reduced row scans and better index usage by the optimizer:
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
| 1 | SIMPLE | cf | ref | PRIMARY,name_x | name_x | 103 | const | 1 | Using where; Using index |
| 1 | SIMPLE | sre | ref | staId,sre_idx1,shelfId_hubId,updateDate_statusId_statId_idx,statId_statusId_x | shelfId_hubId | 9 | const | 2936 | Using where |
| 1 | SIMPLE | scf | ref | sip_idx,active_idx,flag_idx | sip_idx | 22 | logis.sre.shipmentId | 1 | Using index condition; Using where |
| 1 | SIMPLE | scam | eq_ref | mappingId | mappingId | 8 | logis.scf.mappingId | 1 | NULL |
| 1 | SIMPLE | ssdm | ref | shipmentIdIdx,sellerDetailIdIdx | sipIdIdx | 17 | logis.sre.shipmentId | 1 | NULL |
| 1 | SIMPLE | sd | eq_ref | PRIMARY,merchantIdIdx | PRIMARY | 8 | logis.ssdm.sellerDetailId | 1 | Using where |
+----+-------------+-------+--------+--------------------------------------------------------------------------------+---------------+---------+---------------------------------+------+------------------------------------+
Note that the new index plan applies only to new incoming queries. Within a short span of time, the CPU usage dropped drastically and there was a huge boost in performance; please find the graph below. Now the dashboards are faster than ever before.
Key Takeaways:
Below are points to note while setting this variable.
- Too high a value can result in a longer time for stats calculation.
- Too low a value can produce inaccurate stats and lead to the situation discussed above.
- As a table grows, InnoDB allegedly re-ANALYZEs, i.e., recalculates stats, after 10% growth, so no manual action is needed.
- Most smaller tables get decent stats from sampling 20 pages.
- Tables with uneven data distribution won't benefit from changing the 20; to tackle that, we have histograms from MySQL 8.0.

  • Moving from MySQL 5.7 to MySQL 8.0 - What You Should Know
    April 2018 is not just another date for the MySQL world: MySQL 8.0 was released then, and more than a year later, it's probably time to consider migrating to this new version. MySQL 8.0 has important performance and security improvements, and, as in any migration to a new database version, there are several things to take into account before going into production to avoid serious issues like data loss, excessive downtime, or even a rollback during the migration task. In this blog, we'll mention some of the new MySQL 8.0 features, some deprecated stuff, and what you need to keep in mind before migrating. What's New in MySQL 8.0? Let's now summarize some of the most important features mentioned in the official documentation for this new MySQL version. MySQL incorporates a transactional data dictionary that stores information about database objects. An atomic DDL statement combines the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic transaction. The MySQL server automatically performs all necessary upgrade tasks at the next startup to upgrade the system tables in the mysql schema, as well as objects in other schemas such as the sys schema and user schemas. It is not necessary for the DBA to invoke mysql_upgrade. It supports the creation and management of resource groups, and permits assigning threads running within the server to particular groups so that threads execute according to the resources available to the group.  Table encryption can now be managed globally by defining and enforcing encryption defaults. The default_table_encryption variable defines an encryption default for newly created schemas and general tablespaces. Encryption defaults are enforced by enabling the table_encryption_privilege_check variable.  The default character set has changed from latin1 to utf8mb4. It supports the use of expressions as default values in data type specifications.
This includes the use of expressions as default values for the BLOB, TEXT, GEOMETRY, and JSON data types. Error logging was rewritten to use the MySQL component architecture. Traditional error logging is implemented using built-in components, and logging using the system log is implemented as a loadable component. A new type of backup lock permits DML during an online backup while preventing operations that could result in an inconsistent snapshot. The new backup lock is supported by LOCK INSTANCE FOR BACKUP and UNLOCK INSTANCE syntax. The BACKUP_ADMIN privilege is required to use these statements. MySQL Server now permits a TCP/IP port to be configured specifically for administrative connections. This provides an alternative to the single administrative connection that is permitted on the network interfaces used for ordinary connections even when max_connections connections are already established. It supports invisible indexes. This index is not used by the optimizer and makes it possible to test the effect of removing an index on query performance, without removing it. Document Store for developing both SQL and NoSQL document applications using a single database. MySQL 8.0 makes it possible to persist global, dynamic server variables using the SET PERSIST command instead of the usual SET GLOBAL one.  MySQL Security and Account Management As there are many improvements related to security and user management, we'll list them in a separate section. The grant tables in the mysql system database are now InnoDB tables.  The new caching_sha2_password authentication plugin is now the default authentication method in MySQL 8.0. It implements SHA-256 password hashing, but uses caching to address latency issues at connect time. It provides more secure password encryption than the mysql_native_password plugin, and provides better performance than sha256_password. MySQL now supports roles, which are named collections of privileges. 
Roles can have privileges granted to and revoked from them, and they can be granted to and revoked from user accounts.  MySQL now maintains information about password history, enabling restrictions on reuse of previous passwords.  It enables administrators to configure user accounts such that too many consecutive login failures due to incorrect passwords cause temporary account locking.  InnoDB Enhancements As with the previous point, there are also many improvements related to this topic, so we'll list them in a separate section too. The current maximum auto-increment counter value is written to the redo log each time the value changes, and saved to an engine-private system table on each checkpoint. These changes make the current maximum auto-increment counter value persistent across server restarts. When encountering index tree corruption, InnoDB writes a corruption flag to the redo log, which makes the corruption flag crash-safe. InnoDB also writes in-memory corruption flag data to an engine-private system table on each checkpoint. During recovery, InnoDB reads corruption flags from both locations and merges results before marking in-memory table and index objects as corrupt. A new dynamic variable, innodb_deadlock_detect, may be used to disable deadlock detection. On high concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. At times, it may be more efficient to disable deadlock detection and rely on the innodb_lock_wait_timeout setting for transaction rollback when a deadlock occurs. InnoDB temporary tables are now created in the shared temporary tablespace, ibtmp1. mysql system tables and data dictionary tables are now created in a single InnoDB tablespace file named mysql.ibd in the MySQL data directory. Previously, these tables were created in individual InnoDB tablespace files in the mysql database directory.
By default, undo logs now reside in two undo tablespaces that are created when the MySQL instance is initialized. Undo logs are no longer created in the system tablespace. The new innodb_dedicated_server variable, which is disabled by default, can be used to have InnoDB automatically configure the following options according to the amount of memory detected on the server: innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method. This option is intended for MySQL server instances that run on a dedicated server.  Tablespace files can be moved or restored to a new location while the server is offline using the innodb_directories option.  Now, let’s take a look at some of the features that you shouldn’t use anymore in this new MySQL version. What is Deprecated in MySQL 8.0? The following features are deprecated and will be removed in a future version. The utf8mb3 character set is deprecated. Please use utf8mb4 instead. Because caching_sha2_password is the default authentication plugin in MySQL 8.0 and provides a superset of the capabilities of the sha256_password authentication plugin, sha256_password is deprecated. The validate_password plugin has been reimplemented to use the server component infrastructure. The plugin form of validate_password is still available but is deprecated. The ENGINE clause for the ALTER TABLESPACE and DROP TABLESPACE statements. The PAD_CHAR_TO_FULL_LENGTH SQL mode. AUTO_INCREMENT support is deprecated for columns of type FLOAT and DOUBLE (and any synonyms). Consider removing the AUTO_INCREMENT attribute from such columns, or convert them to an integer type. The UNSIGNED attribute is deprecated for columns of type FLOAT, DOUBLE, and DECIMAL (and any synonyms). Consider using a simple CHECK constraint instead for such columns. FLOAT(M,D) and DOUBLE(M,D) syntax to specify the number of digits for columns of type FLOAT and DOUBLE (and any synonyms) is a nonstandard MySQL extension. This syntax is deprecated. 
The nonstandard C-style &&, ||, and ! operators that are synonyms for the standard SQL AND, OR, and NOT operators, respectively, are deprecated. Applications that use the nonstandard operators should be adjusted to use the standard operators. The mysql_upgrade client is deprecated because its capabilities for upgrading the system tables in the mysql system schema and objects in other schemas have been moved into the MySQL server. The mysql_upgrade_info file, which is created in the data directory and used to store the MySQL version number, is also deprecated. The relay_log_info_file system variable and --master-info-file option are deprecated. Previously, these were used to specify the name of the relay log info log and master info log when relay_log_info_repository=FILE and master_info_repository=FILE were set, but those settings have been deprecated. The use of files for the relay log info log and master info log has been superseded by crash-safe slave tables, which are the default in MySQL 8.0. The use of the MYSQL_PWD environment variable to specify a MySQL password is deprecated. And now, let's take a look at some of the features that you must stop using in this MySQL version. What Was Removed in MySQL 8.0? The following features have been removed in MySQL 8.0. The innodb_locks_unsafe_for_binlog system variable was removed. The READ COMMITTED isolation level provides similar functionality. Using GRANT to create users. Instead, use CREATE USER. Following this practice makes the NO_AUTO_CREATE_USER SQL mode immaterial for GRANT statements, so it too is removed, and an error now is written to the server log when the presence of this value for the sql_mode option in the options file prevents mysqld from starting. Using GRANT to modify account properties other than privilege assignments. This includes authentication, SSL, and resource-limit properties. Instead, establish such properties at account-creation time with CREATE USER or modify them afterward with ALTER USER.
- IDENTIFIED BY PASSWORD 'auth_string' syntax for CREATE USER and GRANT. Instead, use IDENTIFIED WITH auth_plugin AS 'auth_string' for CREATE USER and ALTER USER, where the 'auth_string' value is in a format compatible with the named plugin.
- The PASSWORD() function. Additionally, PASSWORD() removal means that SET PASSWORD ... = PASSWORD('auth_string') syntax is no longer available.
- The old_passwords system variable.
- The FLUSH QUERY CACHE and RESET QUERY CACHE statements.
- These system variables: query_cache_limit, query_cache_min_res_unit, query_cache_size, query_cache_type, query_cache_wlock_invalidate.
- These status variables: Qcache_free_blocks, Qcache_free_memory, Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, Qcache_not_cached, Qcache_queries_in_cache, Qcache_total_blocks.
- These thread states: checking privileges on cached query, checking query cache for a query, invalidating query cache entries, sending cached result to the client, storing result in the query cache, Waiting for query cache lock.
- The tx_isolation and tx_read_only system variables have been removed. Use transaction_isolation and transaction_read_only instead.
- The sync_frm system variable has been removed because .frm files have become obsolete.
- The secure_auth system variable and --secure-auth client option have been removed. The MYSQL_SECURE_AUTH option for the mysql_options() C API function was removed.
- The log_warnings system variable and --log-warnings server option have been removed. Use the log_error_verbosity system variable instead.
- The global scope for the sql_log_bin system variable was removed. sql_log_bin has session scope only, and applications that rely on accessing @@GLOBAL.sql_log_bin should be adjusted.
- The unused date_format, datetime_format, time_format, and max_tmp_tables system variables are removed.
- The deprecated ASC or DESC qualifiers for GROUP BY clauses are removed.
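Since PASSWORD() is gone, a password change that used the old SET PASSWORD ... = PASSWORD('auth_string') form can be rewritten as follows (the account name is hypothetical):

```sql
-- No longer valid in MySQL 8.0:
-- SET PASSWORD FOR 'app'@'localhost' = PASSWORD('new_secret');

-- MySQL 8.0 equivalents:
SET PASSWORD FOR 'app'@'localhost' = 'new_secret';
-- or
ALTER USER 'app'@'localhost' IDENTIFIED BY 'new_secret';
```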
Queries that previously relied on GROUP BY sorting may produce results that differ from previous MySQL versions. To produce a given sort order, provide an ORDER BY clause.
- The parser no longer treats \N as a synonym for NULL in SQL statements. Use NULL instead. This change does not affect text file import or export operations performed with LOAD DATA or SELECT ... INTO OUTFILE, for which NULL continues to be represented by \N.
- The client-side --ssl and --ssl-verify-server-cert options have been removed. Use --ssl-mode=REQUIRED instead of --ssl=1 or --enable-ssl. Use --ssl-mode=DISABLED instead of --ssl=0, --skip-ssl, or --disable-ssl. Use --ssl-mode=VERIFY_IDENTITY instead of the --ssl-verify-server-cert option.
- The mysql_install_db program has been removed from MySQL distributions. Data directory initialization should be performed by invoking mysqld with the --initialize or --initialize-insecure option instead. In addition, the --bootstrap option for mysqld that was used by mysql_install_db was removed, and the INSTALL_SCRIPTDIR CMake option that controlled the installation location for mysql_install_db was removed.
- The mysql_plugin utility was removed. Alternatives include loading plugins at server startup using the --plugin-load or --plugin-load-add option, or at runtime using the INSTALL PLUGIN statement.
- The resolveip utility was removed. nslookup, host, or dig can be used instead.

There are a lot of new, deprecated, and removed features. You can check the official website for more detailed information.

Considerations Before Migrating to MySQL 8.0

Let's now mention some of the most important things to consider before migrating to this MySQL version.

Authentication Method

As we mentioned, caching_sha2_password is now the default authentication method, so you should check whether your application/connector supports it. If not, let's see how you can change the default authentication method and the user authentication plugin back to 'mysql_native_password'.
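To illustrate the GROUP BY change, a query that used the removed ASC/DESC qualifier now needs an explicit ORDER BY clause (sketched with a hypothetical sales table):

```sql
-- Accepted in MySQL 5.7, removed in 8.0:
-- SELECT region, SUM(total) FROM sales GROUP BY region DESC;

-- MySQL 8.0: request the sort order explicitly
SELECT region, SUM(total) AS region_total
FROM sales
GROUP BY region
ORDER BY region DESC;
```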
To change the default authentication method, edit the my.cnf configuration file and add/edit the following line:

$ vi /etc/my.cnf
[mysqld]
default_authentication_plugin=mysql_native_password

To change the user authentication plugin, run the following command with a privileged user:

$ mysql -p
ALTER USER 'username'@'hostname' IDENTIFIED WITH 'mysql_native_password' BY 'password';

Anyway, these changes aren't a permanent solution, as the old authentication method could be deprecated soon, so you should take this into account for a future database upgrade.

Roles are also an important feature here. You can reduce the individual privileges by assigning them to a role and adding the corresponding users to it.

For example, you can create a new role for the marketing and the developers teams:

$ mysql -p
CREATE ROLE 'marketing', 'developers';

Assign privileges to these new roles:

GRANT SELECT ON *.* TO 'marketing';
GRANT ALL PRIVILEGES ON *.* TO 'developers';

And then, assign the roles to the users:

GRANT 'marketing' TO 'marketing1'@'%';
GRANT 'marketing' TO 'marketing2'@'%';
GRANT 'developers' TO 'developer1'@'%';

And that's it. You'll have the following privileges:

SHOW GRANTS FOR 'marketing1'@'%';
+-------------------------------------------+
| Grants for marketing1@%                   |
+-------------------------------------------+
| GRANT USAGE ON *.* TO `marketing1`@`%`    |
| GRANT `marketing`@`%` TO `marketing1`@`%` |
+-------------------------------------------+
2 rows in set (0.00 sec)

SHOW GRANTS FOR 'marketing';
+----------------------------------------+
| Grants for marketing@%                 |
+----------------------------------------+
| GRANT SELECT ON *.* TO `marketing`@`%` |
+----------------------------------------+
1 row in set (0.00 sec)

Character Sets

As the new default character set is utf8mb4, you should make sure you're not relying on the old default, as it has changed. To avoid issues, you should specify the character_set_server and collation_server variables in the my.cnf configuration file.
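One detail worth noting about roles: granting a role does not activate it for the user's sessions by default. A minimal sketch of making the granted role active automatically at login, using the marketing1 account from above:

```sql
-- A granted role must be activated before its privileges take effect.
-- Make all granted roles active automatically when the user logs in:
SET DEFAULT ROLE ALL TO 'marketing1'@'%';

-- Within a session, verify which roles are currently active:
SELECT CURRENT_ROLE();
```

Alternatively, enabling the activate_all_roles_on_login system variable activates all granted roles for all users at login.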
$ vi /etc/my.cnf
[mysqld]
character_set_server=latin1
collation_server=latin1_swedish_ci

MyISAM Engine

The MySQL privilege tables in the mysql schema have been moved to InnoDB. You can still create a table with ENGINE=MyISAM, and it will work as before, but copying a MyISAM table into a running MySQL server will not work, because it will not be discovered.

Partitioning

There must be no partitioned tables that use a storage engine that does not have native partitioning support. You can run the following query to verify this point:

$ mysql -p
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE NOT IN ('innodb', 'ndbcluster')
AND CREATE_OPTIONS LIKE '%partitioned%';

If you need to change the engine of a table, you can run:

ALTER TABLE table_name ENGINE = INNODB;

Upgrade Check

As a last step, you can run the mysqlcheck command with the --check-upgrade flag to confirm that everything looks fine:

$ mysqlcheck -uroot -p --all-databases --check-upgrade
Enter password:
mysql.columns_priv                                 OK
mysql.component                                    OK
mysql.db                                           OK
mysql.default_roles                                OK
mysql.engine_cost                                  OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.global_grants                                OK
mysql.gtid_executed                                OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.password_history                             OK
mysql.plugin                                       OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.role_edges                                   OK
mysql.server_cost                                  OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.slow_log                                     OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
sys.sys_config                                     OK
world_x.city                                       OK
world_x.country                                    OK
world_x.countryinfo                                OK
world_x.countrylanguage                            OK

There are several things to check before performing the upgrade. You can check the official MySQL documentation for more detailed information.
Upgrade Methods

There are different ways to upgrade from MySQL 5.7 to 8.0. You can upgrade in place, or you can create a replication slave running the new version and promote it later.

But before upgrading, step 0 must be backing up your data. The backup should include all databases, including the system databases, so that if there is any issue you can roll back as soon as possible.

Another option, depending on the available resources, is creating a cascading replication chain MySQL 5.7 -> MySQL 8.0 -> MySQL 5.7, so that after promoting the new version, if something goes wrong, you can promote the slave node running the old version back. But this could be dangerous if there is some issue with the data, so a backup is a must beforehand.

Whichever method you use, a test environment is necessary to verify that the application works without any issue on the new MySQL 8.0 version.

Conclusion

More than a year after the MySQL 8.0 release, it is time to start thinking about migrating your old MySQL version. Luckily, as the end of support for MySQL 5.7 is in 2023, you have time to create a migration plan and test the application behavior with no rush. Spending some time on that testing step is necessary to avoid any issues after migrating.

Tags: MySQL database upgrade upgrades migration
