Tuesday, May 07, 2024

Optimizing the Linux OS for a Database Server




Optimizing the operating system (OS) for a database server such as MySQL involves configuring various OS settings to improve performance, stability, and security.

  • File System: Use a file system that is optimized for database workloads. For Linux, ext4 or XFS are commonly used. Ensure that the file system is mounted with appropriate options for database performance, such as noatime and nodiratime.
  • I/O Scheduler: Use an I/O scheduler that is suitable for database workloads. For example, on Linux, you might choose the deadline or noop scheduler instead of the default cfq scheduler.
  • File System Cache: Adjust the file system cache settings to ensure that enough memory is available for caching frequently accessed data and reducing disk I/O.
  • Swappiness: Set the swappiness parameter to a lower value (e.g., 10) to reduce the likelihood of the OS swapping memory to disk, which can negatively impact database performance.
  • Kernel Parameters: Adjust kernel parameters such as vm.overcommit_memory, vm.swappiness, and net.core.somaxconn to optimize memory, swap, and network settings for database performance.
  • Disable Transparent Huge Pages (THP): THP can cause performance issues with database workloads. It's recommended to disable THP by setting transparent_hugepage=never in the kernel boot parameters.
  • Network Configuration: Tune network settings such as net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and net.ipv4.tcp_tw_reuse to optimize network performance for MySQL.
  • Security Settings: Ensure that the OS is properly secured, including using firewall rules, restricting access to sensitive files, and keeping the OS and MySQL server software up to date with security patches.
  • Monitoring and Tuning: Continuously monitor the OS and MySQL server performance using tools like top, vmstat, iostat, and MySQL's built-in performance monitoring tools. Use the data collected to identify bottlenecks and tune the system accordingly.


Here are some general tips for optimizing the OS for MySQL:


File System: Use a file system that is optimized for database workloads. For Linux, ext4 or XFS are commonly used. Ensure that the file system is mounted with appropriate options for database performance, such as noatime and nodiratime. noatime and nodiratime are mount options used with the mount command in Linux to optimize file system performance by reducing the amount of disk writes for file access timestamps.

Here's what each option does:

noatime: When the noatime option is used, the file system does not update the access
time (atime) of files when they are read. By default, most file systems update the atime
whenever a file is read, which can result in unnecessary disk writes. Using noatime
can reduce disk I/O and improve overall file system performance, especially on systems
with heavy read loads.

nodiratime: Similar to noatime, the nodiratime option prevents the update of
access times for directories. This can further reduce disk I/O, especially for applications
that frequently access directories but not the files within them.

mount -o remount,noatime,nodiratime /dev/sda1 /mnt/data
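
To make these options persistent across reboots, they can also be added to the file system's entry in /etc/fstab. The device, mount point, and file system type below are placeholders for illustration:

plaintext

/dev/sda1  /mnt/data  xfs  defaults,noatime,nodiratime  0  2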

I/O Scheduler: Use an I/O scheduler that is suitable for database workloads. For example, on Linux, you might choose the deadline or noop scheduler instead of the default cfq scheduler.

For database workloads, particularly with MySQL, using the right I/O scheduler can improve performance. The choice of scheduler depends on your specific workload, storage configuration, and Linux distribution. However, some general guidelines can help:

Deadline Scheduler: The Deadline scheduler is often recommended for database
workloads. It aims to minimize I/O latency for read and write requests, which can
be beneficial for databases that require low latency.


NOOP Scheduler: The NOOP scheduler is another option, especially for storage
devices with their own I/O scheduling mechanisms, such as SSDs.
NOOP doesn't re-order requests and can be useful when the storage device is
capable of handling request scheduling itself.


CFQ Scheduler: The Completely Fair Queuing (CFQ) scheduler, which is the
default for many Linux distributions, may not be as well-suited for
database workloads. CFQ is more general-purpose and may not provide the
same level of performance for database I/O patterns.

To change the I/O scheduler, you can use the echo command to write the scheduler name to the appropriate sysfs file. For example, to change the scheduler for /dev/sda to Deadline:

bash

echo deadline > /sys/block/sda/queue/scheduler
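
To verify which schedulers are available and which one is active (the active scheduler is shown in brackets), assuming the data disk is /dev/sda:

bash

cat /sys/block/sda/queue/scheduler
# older kernels typically show something like: noop [deadline] cfq
# newer kernels using the multiqueue block layer show options such as: [mq-deadline] none bfq,
# where mq-deadline and none play the roles of deadline and noop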

File System Cache: Adjust the file system cache settings to ensure that enough memory is available for caching frequently accessed data and reducing disk I/O.

Swappiness: Set the swappiness parameter to a lower value (e.g., 10) to reduce the likelihood of the OS swapping memory to disk, which can negatively impact database performance.
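
For example, to set swappiness to 10 at runtime and make it persistent (10 is a common starting point for database servers, not a universal rule):

bash

sudo sysctl -w vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf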

Kernel Parameters: Adjust kernel parameters such as vm.overcommit_memory, vm.swappiness, and net.core.somaxconn to optimize memory, swap, and network settings for database performance.

vm.overcommit_memory parameter in Linux controls the kernel's overcommit handling behavior for memory allocation. It determines whether the kernel allows processes to allocate more memory than is physically available on the system. There are three possible values for this parameter:

0 (default): Heuristic overcommit. The kernel allows processes to allocate more memory than is physically available, relying on the fact that not all allocated memory is used at once, but it refuses obviously excessive requests. If the system does run out of physical memory, the out-of-memory (OOM) killer may terminate processes.


1: Always overcommit. The kernel never refuses a memory allocation, regardless of how much memory is actually available. This maximizes allocation flexibility but makes OOM-killer intervention more likely under memory pressure.


2: Don't overcommit. The kernel uses strict accounting and refuses allocations that would exceed the commit limit (swap plus a configurable percentage of physical RAM, set by vm.overcommit_ratio). This setting is often recommended for database workloads like MySQL to prevent the database from being killed due to memory allocation failures.

For database servers with predictable memory usage patterns, setting vm.overcommit_memory to 2 is often recommended to avoid unexpected process termination due to memory allocation failures.
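
A minimal sketch of applying this setting; the overcommit_ratio value of 80 is only an illustrative assumption and should be sized to your server's RAM and swap:

bash

sudo sysctl -w vm.overcommit_memory=2
# with mode 2, the commit limit = swap + RAM * vm.overcommit_ratio / 100
sudo sysctl -w vm.overcommit_ratio=80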

net.core.somaxconn parameter in Linux controls the maximum number of connections that can be queued to a socket waiting for acceptance. When a server receives connection requests faster than it can process them, the requests are queued until they can be processed. This parameter sets the maximum size of this queue.

Increasing net.core.somaxconn can be beneficial for server applications, such as MySQL, that handle a large number of incoming connections. By increasing this value, you allow the server to handle more incoming connections simultaneously, reducing the likelihood of clients experiencing connection timeouts or rejections due to a full connection queue.

To check the current value of net.core.somaxconn, you can use the following command:

bash

sysctl net.core.somaxconn

To temporarily change the value of net.core.somaxconn, you can use the sysctl command with the -w option:

bash

sudo sysctl -w net.core.somaxconn=1024

To make the change permanent across reboots, add the following line to your /etc/sysctl.conf file:

plaintext

net.core.somaxconn=1024

After editing sysctl.conf, you can apply the changes by running:

bash

sudo sysctl -p


Replace 1024 with the desired maximum queue size. It's important to monitor your system's performance after changing this parameter to ensure that it meets your application's requirements without causing resource exhaustion.

Disable Transparent Huge Pages (THP): THP can cause performance issues with database workloads. It's recommended to disable THP by setting transparent_hugepage=never in the kernel boot parameters.

Transparent Huge Pages (THP) is a feature in the Linux kernel that improves memory management efficiency by using larger memory pages (known as huge pages) compared to the standard page size. Standard pages are typically 4KB in size, while huge pages can be 2MB or even larger, depending on the system configuration.

THP works by transparently and automatically allocating and managing these huge pages, without requiring any changes to the application code. When an application requests memory, the kernel can allocate memory using huge pages if certain criteria are met. This can reduce the overhead associated with managing a large number of small pages and improve performance for memory-intensive applications.

Disabling Transparent Huge Pages (THP) can be beneficial for database workloads, including MySQL, as it can help reduce latency and improve performance. For workloads with high memory allocation and deallocation rates, such as databases, THP can introduce performance issues due to increased memory fragmentation and management overhead.

To disable THP temporarily, you can use the following commands:

bash

echo never > /sys/kernel/mm/transparent_hugepage/enabled

echo never > /sys/kernel/mm/transparent_hugepage/defrag


To disable THP permanently, you can add the following lines to your /etc/rc.local file (create the file if it doesn't exist and make sure it is executable):

bash

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi

if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
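
On systemd-based distributions, /etc/rc.local may not be executed by default, so a small one-shot service is another way to apply the same two echo commands at boot. The unit below is only a sketch, and the file name disable-thp.service is an assumption:

plaintext

# /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target

Enable it with:

bash

sudo systemctl daemon-reload
sudo systemctl enable --now disable-thp.service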

Network Configuration: Tune network settings such as net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and net.ipv4.tcp_tw_reuse to optimize network performance for MySQL.

net.ipv4.tcp_max_syn_backlog:

This parameter defines the maximum number of pending TCP SYN (synchronize) packets that can be queued for processing before the kernel starts to drop new incoming connections. SYN packets are part of the TCP handshake process used to establish connections.


Increasing net.ipv4.tcp_max_syn_backlog can help prevent the loss of incoming connection requests during high traffic periods or when the server is under heavy load. It allows the server to queue more SYN packets, reducing the likelihood of SYN packet drops.


net.ipv4.tcp_tw_reuse:

This parameter enables or disables the reuse of TIME_WAIT sockets. When a TCP connection is closed, it enters the TIME_WAIT state for a period of time to ensure that any delayed packets related to the connection are not mistaken for new connections.


Enabling net.ipv4.tcp_tw_reuse allows the kernel to reuse TIME_WAIT sockets for new connections if it can ensure that the new connection won't receive packets from the old connection. This can help reduce the number of sockets in the TIME_WAIT state and conserve resources, especially in high-traffic environments.

For MySQL, setting the net.ipv4.tcp_max_syn_backlog and net.ipv4.tcp_tw_reuse parameters can help optimize TCP connection handling and improve performance, especially in environments with high connection rates. Here's how these parameters can be set for MySQL:

net.ipv4.tcp_max_syn_backlog:

Setting net.ipv4.tcp_max_syn_backlog to a higher value allows the server to queue more TCP SYN packets, which are used to establish new connections. This can be beneficial for MySQL servers that experience a high rate of incoming connection requests.


For example, to set net.ipv4.tcp_max_syn_backlog to 4096, you can use the following command:
bash

sysctl -w net.ipv4.tcp_max_syn_backlog=4096

To make the change permanent, add the following line to your /etc/sysctl.conf file:
plaintext

net.ipv4.tcp_max_syn_backlog=4096

net.ipv4.tcp_tw_reuse:

Enabling net.ipv4.tcp_tw_reuse allows the kernel to reuse TIME_WAIT sockets for new connections if it can ensure that the new connection won't receive packets from the old connection. This can help reduce the number of sockets in the TIME_WAIT state and conserve resources. Note that this setting only affects outgoing (locally initiated) connections.


To enable net.ipv4.tcp_tw_reuse, use the following command:
bash

sysctl -w net.ipv4.tcp_tw_reuse=1

To make the change permanent, add the following line to your /etc/sysctl.conf file:
plaintext

net.ipv4.tcp_tw_reuse=1
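
The kernel and network settings discussed above can also be collected in a single sysctl drop-in file so they survive reboots. The file name and the values below are illustrative assumptions, not tuned recommendations:

plaintext

# /etc/sysctl.d/99-mysql.conf
vm.swappiness = 10
vm.overcommit_memory = 2
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_tw_reuse = 1

Apply the file without rebooting:

bash

sudo sysctl --system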

Monday, May 06, 2024

MySQL Notes

Notes :


Locking :

Shared Lock (S-lock): A shared lock allows multiple transactions to read a row concurrently, but it prevents any transaction from modifying the row until the shared lock is released. Shared locks are used for SELECT statements.

Exclusive Lock (X-lock): An exclusive lock allows a transaction to both read and modify a row, but it prevents any other transaction from accessing the row until the exclusive lock is released. Exclusive locks are used for INSERT, UPDATE, and DELETE statements.

Next-key lock : In MySQL, a next-key lock is a type of index-record lock used by the InnoDB storage engine. Next-key locks are used to prevent phantom reads and to ensure the isolation levels defined by the SQL standard.

Next-key locks are a combination of a record lock and a gap lock. They lock the index entry for the record and the gap before that record, effectively preventing other transactions from inserting a new record that would fall within the range covered by the lock.

Record lock : a record lock is a type of lock that is used to manage concurrent access to individual rows in a table. When a transaction accesses a row for reading or writing, MySQL automatically acquires a record lock on that row to prevent other transactions from modifying it concurrently.

Record locks in MySQL are used to enforce the isolation levels defined by the SQL standard, such as READ COMMITTED and REPEATABLE READ. These locks ensure that transactions can read and modify rows without interference from other transactions, thus maintaining data integrity.

Gap lock : a gap lock is a type of lock that is used to prevent phantom reads. Phantom reads occur when a transaction reads a range of rows that satisfy a condition, but another transaction inserts new rows that also satisfy the condition, causing the first transaction to see additional rows in subsequent reads.

Auto-INC : AUTO-INC locks are used to manage the auto-increment values for columns that are set to auto-increment. When a new row is inserted into a table with an auto-increment column, MySQL automatically generates a unique value for that column.

The AUTO-INC lock is a special type of lock that is used to ensure that each new value generated for the auto-increment column is unique, even in a multi-user environment. When a new row is inserted, MySQL takes an AUTO-INC lock on the table to prevent other transactions from inserting rows with the same auto-increment value.

Intention Lock : Intention locks are used to indicate the intention of a transaction to modify a table at a certain level (e.g., row-level or table-level), without actually locking the rows or the table. Intention locks are a way for MySQL to manage locks efficiently and to prevent conflicts between transactions that might be attempting to modify the same table or rows.

There are two types of intention locks in MySQL:

Intention Shared (IS): An intention shared lock indicates that a transaction intends to read rows in a table. It does not prevent other transactions from reading or acquiring IS locks on the same table, but it prevents transactions from acquiring exclusive locks (X locks) on the table.


Intention Exclusive (IX): An intention exclusive lock indicates that a transaction intends to modify rows in a table. It is compatible with other intention locks (IS and IX) on the same table, but it conflicts with table-level shared locks (S locks) and exclusive locks (X locks).

Intention locks are acquired automatically by MySQL when a transaction acquires row-level locks or table-level locks. They are used to prevent conflicting lock requests from other transactions and to ensure that transactions can proceed without deadlock.


Predicate locks : Predicate locks are used to manage concurrent access to SPATIAL indexes in tables that contain spatial data. For operations involving SPATIAL indexes, next-key locking does not work well to support the REPEATABLE READ or SERIALIZABLE transaction isolation levels, because there is no absolute ordering concept in multidimensional data, so it is not clear which is the "next" key.
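
As a quick illustration of shared versus exclusive row locks in InnoDB, the two sessions below are a sketch; the accounts table and its columns are invented for the example:

-- Session 1: take a shared (S) lock on one row
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;  -- LOCK IN SHARE MODE before MySQL 8.0

-- Session 2: another shared read succeeds, but an exclusive (X) lock must wait
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR SHARE;      -- OK, S locks are compatible
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- blocks until session 1 commits or rolls back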





Isolation level :- isolation level refers to the degree to which one transaction's changes are visible to other concurrent transactions. MySQL supports several isolation levels, each offering different trade-offs between concurrency and data consistency. The isolation levels defined by the SQL standard are:

READ UNCOMMITTED: This is the lowest isolation level. It allows transactions to read data that has been modified by other transactions but not yet committed. This level offers the highest concurrency but sacrifices data consistency, as transactions can see "dirty" reads.


READ COMMITTED: This level ensures that transactions only see data that has been committed by other transactions. It prevents dirty reads but allows non-repeatable reads, where a transaction may see different results for the same query if other transactions commit changes in between.


REPEATABLE READ: This level ensures that once a transaction reads a row, it will see the same data for that row for the duration of the transaction. It prevents non-repeatable reads but allows phantom reads, where a transaction may see new rows inserted by other transactions after its first read.


SERIALIZABLE: This is the highest isolation level. It ensures that transactions are completely isolated from each other, so they cannot see changes made by other transactions until they are committed. It prevents dirty reads, non-repeatable reads, and phantom reads, but it can reduce concurrency due to increased locking. In MySQL, the default isolation level is REPEATABLE READ, but you can change it for a specific session using the SET TRANSACTION ISOLATION LEVEL statement.
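
For example, to check the current isolation level and change it for a session (READ COMMITTED is used here purely as an illustration):

SELECT @@transaction_isolation;  -- REPEATABLE-READ by default
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- or, for all new connections:
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;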



MySQL 8 Features :


Common Table Expression (CTE) using the WITH keyword. - A common table expression (CTE) is a named temporary result set that exists within the scope of a single statement and that can be referred to later within that statement, possibly multiple times.

Window Functions - MySQL supports window functions that, for each row from a query, perform a calculation using rows related to that row.

Innodb_redo_log_enabled - A status variable that indicates whether redo logging is enabled. As of MySQL 8.0.21, redo logging can be disabled with ALTER INSTANCE DISABLE INNODB REDO_LOG (and re-enabled with ALTER INSTANCE ENABLE INNODB REDO_LOG). This functionality is intended for loading data into a new MySQL instance. Disabling redo logging speeds up data loading by avoiding redo log writes and doublewrite buffering.

innodb_extend_and_initialize - Controls how space is allocated to file-per-table and general tablespaces on Linux systems.

When enabled, InnoDB writes NULLs to newly allocated pages. When disabled, space is allocated using posix_fallocate() calls, which reserve space without physically writing NULLs.

innodb_log_writer_threads - Enables dedicated log writer threads for writing redo log records from the log buffer to the system buffers and flushing the system buffers to the redo log files. Dedicated log writer threads can improve performance on high-concurrency systems, but for low-concurrency systems, disabling dedicated log writer threads provides better performance.
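
A small sketch combining a CTE and a window function, using the world sample database that also appears in the optimizer hint examples later in these notes:

WITH city_pop AS (
    SELECT CountryCode, Name, Population
    FROM world.city
)
SELECT CountryCode,
       Name,
       Population,
       RANK() OVER (PARTITION BY CountryCode ORDER BY Population DESC) AS pop_rank
FROM city_pop
WHERE CountryCode IN ('IND', 'USA');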


Reasons for Slow Queries and Ways to Improve Them :
  • Excessive Full Table Scans
  • Wrong Index - join or filter condition is missing or references the wrong table
  • No Index Used
  • Not a Left Prefix of Index
  • Data Types Not Matching
  • Functional Dependencies
  • Improving the Index Use
  • Add a Covering Index (see the sketch after this list)
  • Rewriting Complex Index Conditions (use CTE & Window Functions)
  • Splitting a Query Into Parts
  • SKIP LOCKED
  • Avoid Many OR or IN Conditions
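
A minimal sketch of the covering-index idea mentioned above, again using the world.city sample table; the index name idx_country_name_pop is an assumption:

-- The query filters on CountryCode and only needs Name and Population:
EXPLAIN SELECT Name, Population FROM world.city WHERE CountryCode = 'IND';

-- A covering index lets the query be answered from the index alone;
-- EXPLAIN then reports "Using index" in the Extra column:
ALTER TABLE world.city ADD INDEX idx_country_name_pop (CountryCode, Name, Population);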

InnoDB Variables :


  • innodb_buffer_pool_size — default: 128 MiB. The total size of the InnoDB buffer pool. Buffer pool hit rate (%) = 100 * (1 - Innodb_pages_read / Innodb_buffer_pool_read_requests).
    ## SELECT Variable_name, Variable_value FROM sys.metrics WHERE Variable_name IN ('Innodb_pages_read', 'Innodb_buffer_pool_read_requests')\G
  • innodb_buffer_pool_instances — default: auto-sized. How many parts the buffer pool is split into. The default is 1 if the total size is less than 1 GiB and otherwise 8. For 32-bit Windows, the default is 1 below 1.3 GiB; otherwise, each instance is made to be 128 MiB. The maximum number of instances is 64.
  • innodb_buffer_pool_dump_pct — default: 25. The percentage of the most recently used pages in the buffer pool that are included when dumping the pool content (backing it up).
  • innodb_old_blocks_time — default: 1000. How long in milliseconds a page must have resided in the old blocks sublist before a new read of the page promotes it to the new blocks sublist.
  • innodb_old_blocks_pct — default: 37. How large the old blocks sublist should be as a percentage of the whole buffer pool.
  • innodb_io_capacity — default: 200. How many I/O operations per second InnoDB is allowed to use during nonurgent conditions.
  • innodb_io_capacity_max — default: 2000. How many I/O operations per second InnoDB is allowed to use during urgent conditions.
  • innodb_flush_method — default: unbuffered or fsync. The method InnoDB uses to write the changes to disk. The default is unbuffered on Microsoft Windows and fsync on Linux/Unix.
  • innodb_log_buffer_size — default: 16 MiB. The size of the log buffer where redo log events are stored in memory before being written to the on-disk redo log files.
  • innodb_log_file_size — default: 48 MiB. The size of each file in the redo log. Redo log usage (%) = 100 * (log_lsn_current - log_lsn_last_checkpoint) / (number of log files * log file size).
    mysql> SET GLOBAL innodb_monitor_enable = 'log_lsn_current', GLOBAL innodb_monitor_enable = 'log_lsn_last_checkpoint';
    mysql> SELECT * FROM sys.metrics WHERE Variable_name IN ('log_lsn_current', 'log_lsn_last_checkpoint')\G
  • innodb_log_files_in_group — default: 2. The number of files in the redo log. There must be at least two files.



Index-Level Optimizer Hints :
## 8.0.19 and earlier:
SELECT ci.CountryCode, co.Name AS Country, ci.Name AS City, ci.District
FROM world.country co IGNORE INDEX (Primary)
INNER JOIN world.city ci FORCE INDEX FOR ORDER BY (CountryCode)
ON ci.CountryCode = co.Code
WHERE co.Continent = 'Asia'
ORDER BY ci.CountryCode, ci.ID;

This query has two index hints: IGNORE INDEX in the second line and FORCE INDEX FOR ORDER BY in the third line.
## MySQL 8.0.20 and above, you can write the query as:
SELECT /*+ NO_INDEX(co PRIMARY) ORDER_INDEX(ci CountryCode) */
ci.CountryCode, co.Name AS Country, ci.Name AS City, ci.District
FROM world.country co
INNER JOIN world.city ci
ON ci.CountryCode = co.Code
WHERE co.Continent = 'Asia'
ORDER BY ci.CountryCode, ci.ID;



New Hint          Old Hint
JOIN_INDEX        FORCE INDEX FOR JOIN
NO_JOIN_INDEX     IGNORE INDEX FOR JOIN
GROUP_INDEX       FORCE INDEX FOR GROUP BY
NO_GROUP_INDEX    IGNORE INDEX FOR GROUP BY
ORDER_INDEX       FORCE INDEX FOR ORDER BY
NO_ORDER_INDEX    IGNORE INDEX FOR ORDER BY
INDEX             FORCE INDEX
NO_INDEX          IGNORE INDEX



Skip Replication Errors in GTID-Based Replication




Way 1:



This sequence of MySQL commands is used to manually set the Global Transaction Identifier (GTID) position for a replication slave and then resume replication from that point onward. Here's a breakdown of each command:

There are different ways to find the failed transaction. You can examine the binary logs, or you can check Retrieved_Gtid_Set and Executed_Gtid_Set in the SHOW SLAVE STATUS output, as in the example. This slave server has retrieved transactions 1 to 5 but has only executed 1 to 4. That means that transaction 5 is the one that is causing the problems.


STOP SLAVE;: This command stops the replication process on the slave server.


SET GTID_NEXT="7d72f9b4-8577-11e2-a3d7-080027635ef5:5";: This command sets the next GTID to be executed on the slave. The format is server_uuid:transaction_id. In this example, it sets the next GTID to 7d72f9b4-8577-11e2-a3d7-080027635ef5:5.


BEGIN; COMMIT;: These are standard SQL commands to begin and commit a transaction. They are used here to ensure that the GTID position is set within a transaction boundary.


SET GTID_NEXT="AUTOMATIC";: This command resets the GTID position to automatic mode, which means that the slave will resume replication from the GTID specified in the previous step.


START SLAVE;: This command restarts the replication process on the slave server, starting from the GTID specified in the SET GTID_NEXT command.

This sequence of commands can be useful when you need to manually adjust the replication position on a MySQL slave, such as when you need to re-sync the slave from a specific point or after a failover.
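
Putting the steps together, the full sequence looks like this; the UUID and transaction number are the ones from the example above and must be replaced with the failing GTID on your replica:

STOP SLAVE;
SET GTID_NEXT="7d72f9b4-8577-11e2-a3d7-080027635ef5:5";  -- inject an empty transaction for the failing GTID
BEGIN;
COMMIT;
SET GTID_NEXT="AUTOMATIC";
START SLAVE;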


Way 2: slave_exec_mode. This variable allows two values, strict and idempotent. When you run into duplicate-key or missing-key errors in replication, strict mode causes replication to fail until a user has corrected the issue with the data. With idempotent mode, MySQL applies the following rules:

Insert: If you get an insert violation on a unique key, update the record on the slave so that it matches the state based on the full row image.

Update: If you get an update error due to the record not being in the table, insert the record so that it matches the state based on the full row image.

Delete: If you get a delete error stating that the target record cannot be found in the table, do nothing and move on to the next statement.

mysql> start slave;
Query OK, 0 rows affected (0.09 sec)

mysql> show slave status \G
*************************** 1. row ***************************
.....
Last_Error: Could not execute Update_rows event on table idem.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log centos7-1-bin.000003, end_log_pos 1488
.....

mysql> stop slave;
Query OK, 0 rows affected (0.11 sec)

mysql> set global slave_exec_mode = 'idempotent';
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.12 sec)

mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
.....
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
.....
Seconds_Behind_Master: 0
.....
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
.....
1 row in set (0.00 sec)

mysql> select * from t1;
+----+------+
| c1 | c2   |
+----+------+
|  1 |    2 |
+----+------+
1 row in set (0.00 sec)




Way 3 :


This sequence of commands is used to skip a single transaction on a MySQL replication slave by temporarily setting the GTID mode to ON_PERMISSIVE, because SQL_SLAVE_SKIP_COUNTER cannot be used while gtid_mode is ON. Here's a breakdown of each command:

SELECT @@gtid_mode;: This command is used to check the current GTID mode on the server. The result will indicate whether GTIDs are enabled and which mode is active (ON, OFF, or ON_PERMISSIVE).


STOP SLAVE;: This command stops the replication process on the slave server.


SET GLOBAL GTID_MODE=ON_PERMISSIVE;: This command sets the global GTID mode to ON_PERMISSIVE, which allows the server to accept transactions that don't have a GTID.


SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;: This command sets the number of transactions to skip on the slave. In this case, it is set to skip one transaction.


START SLAVE;: This command restarts the replication process on the slave server, skipping the specified number of transactions.


pager grep Seconds: This command sets up a pager in the MySQL client to filter the output of subsequent commands. In this case, it filters the output to show only lines containing the word "Seconds", which is useful for monitoring replication lag.


SHOW SLAVE STATUS\G: This command displays the status of the slave server, including information about replication lag, errors, and more.


SET GLOBAL GTID_MODE=ON;: This command sets the global GTID mode back to ON, reverting the temporary ON_PERMISSIVE setting once the transaction has been skipped.

Overall, these commands are used to temporarily switch the GTID mode to ON_PERMISSIVE, skip a single transaction on the slave, and then revert the GTID mode back to ON. This can be useful in situations where you need to recover from a specific replication error or discrepancy.
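
Collected in one place, the sequence described above looks roughly like this (a sketch; gtid_mode changes require the GLOBAL scope, and pager is a mysql client command rather than SQL):

SELECT @@gtid_mode;
STOP SLAVE;
SET GLOBAL GTID_MODE = ON_PERMISSIVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
-- in the mysql client: pager grep Seconds
SHOW SLAVE STATUS\G
SET GLOBAL GTID_MODE = ON;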

Way 4: the --slave-skip-errors option.

Normally, replication stops when an error occurs on the replica, which gives you the opportunity to resolve the inconsistency in the data manually. This option causes the replication SQL thread to continue replication when a statement returns any of the errors listed in the option value.

Do not use this option unless you fully understand why you are getting errors. If there are no bugs in your replication setup and client programs, and no bugs in MySQL itself, an error that stops replication should never occur. Indiscriminate use of this option results in replicas becoming hopelessly out of synchrony with the source, with you having no idea why this has occurred.

Examples:
--slave-skip-errors=1062,1053
--slave-skip-errors=all
--slave-skip-errors=ddl_exist_errors


MySQL Optimization Queries :


## Duplicate Indexes

SELECT s.INDEXED_COL, GROUP_CONCAT(INDEX_NAME) AS duplicate_indexes
FROM (SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME,
             GROUP_CONCAT(CONCAT(TABLE_SCHEMA, '.', TABLE_NAME, '.', COLUMN_NAME)
                          ORDER BY SEQ_IN_INDEX) AS INDEXED_COL
      FROM INFORMATION_SCHEMA.STATISTICS
      GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME) AS s
GROUP BY s.INDEXED_COL
HAVING COUNT(1) > 1;

## Unused Indexes
SELECT * from sys.schema_unused_indexes where index_name not like 'fk_%' and
object_schema not in ( 'performance_schema', 'mysql' , 'information_schema');

## List tables by the size of data and indexes

select table_schema as database_name, table_name,
round(1.0*data_length/1024/1024, 2) as data_size,
round(index_length/1024/1024, 2) as index_size,
round((data_length + index_length)/1024/1024, 2) as total_size
from information_schema.tables
where table_schema not in ('information_schema', 'mysql', 'sys', 'performance_schema')
-- and table_schema = 'your database name'
order by total_size desc;

## Show Indexes for All Tables in Database

SELECT DISTINCT TABLE_NAME, INDEX_NAME FROM INFORMATION_SCHEMA.STATISTICS
WHERE TABLE_SCHEMA = 'sample';

## List all Indexes of All Database Schema

SELECT DISTINCT TABLE_NAME, INDEX_NAME FROM INFORMATION_SCHEMA.STATISTICS;

## Query innodb_locks (MySQL 5.7 and earlier; information_schema.innodb_locks was removed in MySQL 8.0, where performance_schema.data_locks replaces it)
select * from information_schema.innodb_locks;

## Query to see metadata locks and the queries that cause them

SELECT ml.object_type
, ml.object_schema
, ml.object_name
, ml.lock_type
, ml.lock_duration
, ml.lock_status
, t.*
, concat('KILL ', t.processlist_id, ';') AS kill_command
, concat('CALL mysql.rds_kill(', t.processlist_id, ');') AS rds_kill_command
FROM performance_schema.metadata_locks ml
INNER JOIN performance_schema.threads t ON t.thread_id = ml.owner_thread_id
WHERE t.type = 'FOREGROUND'
AND t.processlist_command != 'Sleep'
AND t.processlist_command != 'Daemon'
AND t.processlist_id != connection_id()
ORDER BY t.processlist_time DESC;

## Query to see which queries are blocking others (uses information_schema.innodb_lock_waits from MySQL 5.7; in MySQL 8.0, sys.innodb_lock_waits provides similar information)
SELECT block.trx_id AS blocking_trx_id
, block.trx_query AS blocking_trx_query
, time_to_sec(timediff(now(), req.trx_wait_started)) AS requesting_trx_wait_sec
, req.trx_id AS requesting_trx_id
, req.trx_query AS requesting_trx_query
, concat('KILL ', bt.processlist_id, ';') AS kill_command
, concat('CALL mysql.rds_kill(', bt.processlist_id, ');') AS rds_kill_command
FROM information_schema.innodb_lock_waits lw
INNER JOIN information_schema.innodb_trx block ON block.trx_id = lw.blocking_trx_id
INNER JOIN performance_schema.threads bt ON bt.thread_id = block.trx_mysql_thread_id
INNER JOIN information_schema.innodb_trx req ON req.trx_id = lw.requesting_trx_id
ORDER BY requesting_trx_wait_sec DESC
, block.trx_id
, req.trx_id;

## Requested locks and locks held by InnoDB transactions (uses information_schema.innodb_locks, removed in MySQL 8.0; use performance_schema.data_locks there):

SELECT l.lock_type
, l.lock_table
, l.lock_index
, CASE
WHEN l.lock_mode = 'S' THEN 'SHARED'
WHEN l.lock_mode = 'X' THEN 'EXCLUSIVE'
WHEN l.lock_mode = 'IS' THEN 'INTENTION_SHARED'
WHEN l.lock_mode = 'IX' THEN 'INTENTION_EXCLUSIVE'
ELSE l.lock_mode END AS lock_mode
, time_to_sec(timediff(now(), trx.trx_started)) AS trx_length_sec
, trx.*
, concat('KILL ', t.processlist_id, ';') AS kill_command
, concat('CALL mysql.rds_kill(', t.processlist_id, ');') AS rds_kill_command
FROM information_schema.innodb_locks l
INNER JOIN information_schema.innodb_trx trx ON trx.trx_id = l.lock_trx_id
INNER JOIN performance_schema.threads t ON t.thread_id = trx.trx_mysql_thread_id
ORDER BY trx.trx_wait_started IS NOT NULL
, trx.trx_wait_started
, trx_length_sec DESC;












